Message-ID: <CAJD7tkYrF9OkfahXVqRMNo2-krrotjeY+Qp-pb9e1ebrFWS6PA@mail.gmail.com>
Date: Thu, 20 Jul 2023 17:07:14 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Yu Zhao <yuzhao@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
"T.J. Mercier" <tjmercier@...gle.com>,
Greg Thelen <gthelen@...gle.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, cgroups@...r.kernel.org
Subject: Re: [RFC PATCH 0/8] memory recharging for offline memcgs
On Thu, Jul 20, 2023 at 5:02 PM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>
> On Thu, Jul 20, 2023 at 07:08:17AM +0000, Yosry Ahmed wrote:
> > This patch series implements the proposal in LSF/MM/BPF 2023 conference
> > for reducing offline/zombie memcgs by memory recharging [1]. The main
> > difference is that this series focuses on recharging and does not
> > include eviction of any memory charged to offline memcgs.
> >
> > Two methods of recharging are proposed:
> >
> > (a) Recharging of mapped folios.
> >
> > When a memcg is offlined, queue an asynchronous worker that will walk
> > the lruvec of the offline memcg and try to recharge any mapped folios to
> > the memcg of one of the processes mapping the folio. The main assumption
> > is that a process mapping the folio is the "rightful" owner of the
> > memory.
> >
> > Currently, this is only supported for evictable folios, as the
> > unevictable lru is imaginary and we cannot iterate the folios on it. A
> > separate proposal [2] was made to revive the unevictable lru, which
> > would allow recharging of unevictable folios.
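
To make (a) a bit more concrete, here is a rough sketch of the shape
such a worker could take. It only illustrates the lruvec walk; the
recharge_work field and the recharge_mapped_folio() helper are
hypothetical names, and the actual patches additionally deal with
folio isolation, references and races:

/*
 * Illustrative sketch only, not the code in this series.
 * recharge_mapped_folio() is a hypothetical helper that hands the
 * folio off (e.g. by queueing it) for recharging to the memcg of one
 * of the processes mapping it, found via rmap, so nothing heavy runs
 * under the lru lock here.
 */
static void memcg_recharge_workfn(struct work_struct *work)
{
	struct mem_cgroup *memcg = container_of(work, struct mem_cgroup,
						recharge_work);
	struct pglist_data *pgdat;

	for_each_online_pgdat(pgdat) {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
		struct folio *folio, *next;
		enum lru_list lru;

		/* Only the evictable lrus can be walked today. */
		for_each_evictable_lru(lru) {
			spin_lock_irq(&lruvec->lru_lock);
			list_for_each_entry_safe(folio, next,
						 &lruvec->lists[lru], lru) {
				if (folio_mapped(folio))
					recharge_mapped_folio(folio);
			}
			spin_unlock_irq(&lruvec->lru_lock);
		}
	}
}
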
> >
> > (b) Deferred recharging of folios.
> >
> > For folios that are unmapped, or mapped but that (a) fails to
> > recharge, we rely on deferred recharging. Simply put, any time a folio
> > is accessed or dirtied by a userspace process, and that folio is charged
> > to an offline memcg, we will try to recharge it to the memcg of the
> > process accessing the folio. Again, we assume this process should be the
> > "rightful" owner of the memory. This is also done asynchronously to avoid
> > slowing down the data access path.
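
Similarly, a rough sketch of the check that (b) relies on: a cheap
test in the access/dirty paths that defers the actual recharge to a
worker. folio_maybe_defer_recharge() and
folio_queue_deferred_recharge() are hypothetical names for
illustration, not what the series uses:

/*
 * Illustrative sketch only. If a folio is still charged to an
 * offline memcg when userspace touches it, queue it for asynchronous
 * recharging to the accessing mm's memcg, keeping the data access
 * path itself cheap.
 */
static inline void folio_maybe_defer_recharge(struct folio *folio,
					       struct mm_struct *mm)
{
	struct mem_cgroup *memcg = folio_memcg(folio);

	if (!memcg || mem_cgroup_online(memcg))
		return;

	folio_queue_deferred_recharge(folio, mm);
}
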
>
> Unfortunately I have to agree with Johannes, Tejun and others who are not big
> fans of this approach.
>
> Lazy recharging leads to an interesting phenomenon: the memory usage of a
> running workload may suddenly go up only because some other workload was
> terminated and now its memory is being recharged. I find it confusing. It also
> makes it hard to set up limits and/or guarantees.
This can happen today.
If memcg A starts accessing some memory and gets charged for it, and
then memcg B also accesses it, B will not be charged for it. If at a
later point memcg A runs into reclaim and the memory is freed, then
the next time memcg B accesses that memory, B's usage will suddenly
go up, only because some other workload experienced reclaim. This is
a very similar scenario, only instead of reclaim, the memcg was
offlined. As a matter of fact, it's common to try to free up a
memcg's memory before removing it (by lowering the limit or using
memory.reclaim). In that case, the net result would be exactly the
same, with the difference being that recharging avoids freeing the
memory and faulting it back in.
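
For reference, the "whoever touches it next pays" behavior described
above is just how charging works today: when memory is faulted (back)
in, it is charged to the memcg of the faulting task's mm. A
simplified sketch, loosely modeled on the anonymous fault path
(handle_missing_page() is a made-up name, not actual kernel code):

/*
 * Illustrative sketch only. After memcg A's copy of the data has
 * been reclaimed and freed, the next fault, possibly from a task in
 * memcg B, allocates a fresh folio and charges it to B.
 */
static vm_fault_t handle_missing_page(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct folio *folio;

	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
				vmf->address, false);
	if (!folio)
		return VM_FAULT_OOM;

	/* The faulting mm's memcg pays, whoever that happens to be. */
	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL)) {
		folio_put(folio);
		return VM_FAULT_OOM;
	}

	/* ... map the folio into the faulting process's page tables ... */
	return 0;
}
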
>
> In general, I don't think we can handle shared memory well without getting rid
> of the "whoever allocates a page, pays the full price" policy and making shared
> ownership a fully supported concept. Of course, that is a huge amount of work,
> and I believe the only way we can achieve it is to compromise on the
> granularity of the accounting. Whether the resulting system would be better in
> real life is hard to say in advance.
>
> Thanks!