Date: Fri, 2 Feb 2024 10:22:34 -0800
From: "T.J. Mercier" <tjmercier@...gle.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: Efly Young <yangyifei03@...ishou.com>, hannes@...xchg.org, akpm@...ux-foundation.org, 
	android-mm@...gle.com, cgroups@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org, mhocko@...nel.org, muchun.song@...ux.dev, 
	roman.gushchin@...ux.dev, shakeelb@...gle.com, yuzhao@...gle.com
Subject: Re: [PATCH] mm: memcg: Use larger chunks for proactive reclaim

On Fri, Feb 2, 2024 at 2:15 AM Michal Koutný <mkoutny@...e.com> wrote:
>
> On Fri, Feb 02, 2024 at 01:02:47PM +0800, Efly Young <yangyifei03@...ishou.com> wrote:
> > > Looking at the code, I'm not quite sure if this can be read this
> > > literally. Efly might be able to elaborate, but we do a full loop of
> > > all nodes and cgroups in the tree before checking nr_reclaimed, and
> > > rely on priority level for granularity. So request size and complexity
> > > of the cgroup tree play a role. I don't know where the exact factor
> > > two would come from.
> >
> > I'm sorry, that conclusion may have been arbitrary; it might only hold
> > for my case. There, I traced the loop running twice each time before
> > checking nr_reclaimed, and it reclaimed less than my request size (1G)
> > every time, so I took the upper bound to be 2 * request. But now it
> > seems this depends on the cgroup tree I constructed, my system state,
> > and my request size (a relatively large chunk). With so many
> > influencing factors, a specific upper bound is not accurate.
>
> Alright, thanks for the background.
>
> > > IMO it's more accurate to phrase it like this:
> > >
> > > Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> > > nodes and cgroups over which the pages are spread. As such, the bigger
> > > the request, the bigger the absolute overreclaim error. Historic
> > > in-kernel users of reclaim have used fixed, small request batches to
> > > approach an appropriate reclaim rate over time. When we reclaim a user
> > > request of arbitrary size, use decaying batches to manage error while
> > > maintaining reasonable throughput.
>
> Hm, decay...
> So shouldn't the formula be
>   nr_pages = delta <= SWAP_CLUSTER_MAX ? delta : (delta + 3*SWAP_CLUSTER_MAX) / 4
> where
>   delta = nr_to_reclaim - nr_reclaimed
> ?
> (So that convergence for small deltas stays the same as in the original
> and other reclaim paths, while a conservative factor is applied to keep
> larger user requests efficient.)
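
For concreteness, here is how that proposal behaves in a standalone
model (plain C, not the actual memory.reclaim handler; reclaim is
assumed to free exactly what was asked, which it does not in practice,
and SWAP_CLUSTER_MAX is 32 as in the kernel):

    #include <stdio.h>

    #define SWAP_CLUSTER_MAX 32UL

    /* Michal's suggestion: reclaim the full delta when it is small,
     * otherwise a quarter of it, rounded up so the batch never drops
     * below SWAP_CLUSTER_MAX while delta is still large. */
    static unsigned long batch_size(unsigned long delta)
    {
            return delta <= SWAP_CLUSTER_MAX ?
                    delta : (delta + 3 * SWAP_CLUSTER_MAX) / 4;
    }

    int main(void)
    {
            unsigned long nr_to_reclaim = 262144; /* 1G in 4K pages */
            unsigned long nr_reclaimed = 0;

            while (nr_reclaimed < nr_to_reclaim) {
                    unsigned long delta = nr_to_reclaim - nr_reclaimed;
                    unsigned long batch = batch_size(delta);

                    printf("delta=%lu batch=%lu\n", delta, batch);
                    nr_reclaimed += batch; /* idealized: exact reclaim */
            }
            return 0;
    }

The batch decays geometrically (each step asks for delta/4 + 24 pages)
until delta falls to SWAP_CLUSTER_MAX or below, at which point the
remainder is requested directly.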

Tapering out at 32 instead of 4 doesn't make much difference in
practice because of how far off the actually reclaimed amount can be
from the request size. We're talking thousands of pages of error for a
request size of a few megs, and hundreds of pages of error for
requests less than 100 pages.

So all of these should be more or less equivalent:
delta <= SWAP_CLUSTER_MAX ? delta : (delta + 3*SWAP_CLUSTER_MAX) / 4
max((nr_to_reclaim - nr_reclaimed) / 4, (nr_to_reclaim - nr_reclaimed) % 4)
(nr_to_reclaim - nr_reclaimed) / 4 + 4
(nr_to_reclaim - nr_reclaimed) / 4

With the mod, I was just trying to avoid passing in a 0 for the request size.
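
Worked example of the mod term, with delta = nr_to_reclaim - nr_reclaimed:

    delta = 3:    3 / 4 = 0,    3 % 4 = 3   ->  max(0, 3)  = 3
    delta = 100:  100 / 4 = 25, 100 % 4 = 0 ->  max(25, 0) = 25

Without the max(), a remaining delta of 1-3 pages would round down to a
zero-page request and reclaim would be asked to do nothing.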
