Date:   Mon, 25 Apr 2022 12:31:51 -0700
From:   Yosry Ahmed <yosryahmed@...gle.com>
To:     David Rientjes <rientjes@...gle.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
        Jonathan Corbet <corbet@....net>,
        Shuah Khan <shuah@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Wei Xu <weixugc@...gle.com>, Greg Thelen <gthelen@...gle.com>,
        Chen Wandun <chenwandun@...wei.com>,
        Vaibhav Jain <vaibhav@...ux.ibm.com>,
        Michal Koutný <mkoutny@...e.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>, cgroups@...r.kernel.org,
        linux-doc@...r.kernel.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>, linux-kselftest@...r.kernel.org,
        Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH v5 1/4] memcg: introduce per-memcg reclaim interface

On Mon, Apr 25, 2022 at 12:15 PM David Rientjes <rientjes@...gle.com> wrote:
>
> On Mon, 25 Apr 2022, Yosry Ahmed wrote:
>
> > From: Shakeel Butt <shakeelb@...gle.com>
> >
> > Introduce a memcg interface to trigger memory reclaim on a memory cgroup.
> >
> > Use case: Proactive Reclaim
> > ---------------------------
> >
> > A userspace proactive reclaimer can continuously probe the memcg to
> > reclaim a small amount of memory. This gives a more accurate and
> > up-to-date working set estimation, as the LRUs are continuously
> > sorted, and can potentially provide more deterministic memory
> > overcommit behavior. The memory overcommit controller can respond
> > more proactively to the changing behavior of the running
> > applications, instead of being reactive.
> >
> > A userspace reclaimer's purpose in this case is not to completely
> > replace kswapd or direct reclaim; it is to proactively identify memory
> > savings opportunities and reclaim an amount of cold pages, set by the
> > policy, to free up memory for more demanding or newly scheduled jobs.
> >
> > A user space proactive reclaimer is used in Google data centers.
> > Additionally, Meta's TMO paper recently referenced a very similar
> > interface used for user space proactive reclaim:
> > https://dl.acm.org/doi/pdf/10.1145/3503222.3507731
> >
> > Benefits of a user space reclaimer:
> > -----------------------------------
> >
> > 1) More flexibility in who is charged for the cpu cost of memory
> > reclaim. For proactive reclaim, it makes more sense for this to be
> > centralized.
> >
> > 2) More flexibility in dedicating resources (like cpu). The memory
> > overcommit controller can balance the cpu cost against the amount of
> > memory reclaimed.
> >
> > 3) Provides a way for applications to keep their LRUs sorted, so that
> > better reclaim candidates are selected under memory pressure. This also
> > gives a more accurate and up-to-date notion of an application's
> > working set.
> >
> > Why is memory.high not enough?
> > ------------------------------
> >
> > - memory.high can be used to trigger reclaim in a memcg and can
> >   potentially be used for proactive reclaim.
> >   However, there is a big downside to using memory.high: it can
> >   introduce high reclaim stalls in the target application, as
> >   allocations from the application's processes or threads can hit
> >   the temporary memory.high limit.
> >
> > - Userspace proactive reclaimers usually use feedback loops to decide
> >   how much memory to proactively reclaim from a workload. The metrics
> >   used for this are usually either refaults or PSI, and these metrics
> >   become messy if the application gets throttled by hitting the high
> >   limit (a rough sketch of such a feedback loop follows this list).
> >
> > - memory.high is a stateful interface; if the userspace proactive
> >   reclaimer crashes for any reason while triggering reclaim, it can
> >   leave the application in a bad state.
> >
> > - If a workload is rapidly expanding, setting memory.high to proactively
> >   reclaim memory can result in actually reclaiming more memory than
> >   intended.
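> >
> > As a rough illustration, a feedback loop of this kind could be
> > structured as follows (the PSI threshold, step size, period and the
> > trigger_reclaim() hook below are made-up placeholders, not something
> > defined by this patch):
> >
> > import time
> >
> > def memory_pressure_some_avg10(cgroup):
> >     # cgroup v2 exposes per-cgroup PSI in <cgroup>/memory.pressure, e.g.
> >     # "some avg10=0.12 avg60=0.05 avg300=0.01 total=12345"
> >     with open(f"{cgroup}/memory.pressure") as f:
> >         some = f.readline().split()
> >     return float(dict(kv.split("=") for kv in some[1:])["avg10"])
> >
> > def proactive_reclaim_loop(cgroup, trigger_reclaim):
> >     step = 16 * 1024 * 1024        # made-up step: reclaim 16M per period
> >     while True:
> >         # Back off while the workload already shows memory pressure;
> >         # otherwise ask the kernel to reclaim a small, fixed amount.
> >         if memory_pressure_some_avg10(cgroup) < 0.1:  # made-up threshold
> >             trigger_reclaim(cgroup, step)
> >         time.sleep(10)                                # made-up period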
> >
> > The benefits of such an interface and the shortcomings of the existing
> > interfaces were discussed further in this RFC thread:
> > https://lore.kernel.org/linux-mm/5df21376-7dd1-bf81-8414-32a73cea45dd@google.com/
> >
> > Interface:
> > ----------
> >
> > Introducing a very simple memcg interface 'echo 10M > memory.reclaim' to
> > trigger reclaim in the target memory cgroup.
> >
> > The interface is introduced as a nested-keyed file to allow for future
> > optional arguments to be easily added to configure the behavior of
> > reclaim.
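> >
> > For illustration only (the cgroup path below is just an example, and
> > error handling is omitted), a concrete trigger_reclaim() for the loop
> > sketched above could be as simple as:
> >
> > def trigger_reclaim(cgroup, nr_bytes):
> >     # Ask the kernel to reclaim roughly nr_bytes from this cgroup,
> >     # equivalent to: echo 10M > <cgroup>/memory.reclaim
> >     # NOTE: the write may fail if the kernel cannot satisfy the request.
> >     with open(f"{cgroup}/memory.reclaim", "w") as f:
> >         f.write(str(nr_bytes))
> >
> > trigger_reclaim("/sys/fs/cgroup/workload", 10 * 1024 * 1024)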
> >
> > Possible Extensions:
> > --------------------
> >
> > - This interface can be extended with an additional parameter or flags
> >   to allow specifying one or more types of memory to reclaim from (e.g.
> >   file, anon, ..).
> >
> > - The interface can also be extended with a node mask to reclaim from
> >   specific nodes. This has use cases for reclaim-based demotion in memory
> >   tiering systems.
> >
> > - A similar per-node interface can also be added to support proactive
> >   reclaim and reclaim-based demotion in systems without memcg.
> >
> > - Add a timeout parameter to make it easier for user space to call the
> >   interface without worrying about being blocked for an undefined amount
> >   of time.
> >
> > For now, let's keep things simple by adding the basic functionality.
> >
> > [yosryahmed@...gle.com: worked on versions v2 onwards, refreshed to
> > current master, updated commit message based on recent
> > discussions and use cases]
> >
> > Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> > Co-developed-by: Yosry Ahmed <yosryahmed@...gle.com>
> > Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
> > Acked-by: Johannes Weiner <hannes@...xchg.org>
> > Acked-by: Michal Hocko <mhocko@...e.com>
> > Acked-by: Wei Xu <weixugc@...gle.com>
> > Acked-by: Roman Gushchin <roman.gushchin@...ux.dev>
>
> Acked-by: David Rientjes <rientjes@...gle.com>
>
> "can over or under reclaim from the target cgroup" begs the question of
> how much more memory the kernel can decide to reclaim :)  I think it's
> assumed that it's minimal and that matches the current implementation that
> rounds up to SWAP_CLUSTER_MAX, though, so looks good.
>
> Thanks Yosry!

I think it could be more complex than this. Some functions that get
called during reclaim only use the nr_to_reclaim parameter to check
whether they need one more iteration, not to limit the actual number of
reclaimed pages per se. For example, nr_to_reclaim is not even passed to
shrink_slab() or mem_cgroup_soft_limit_reclaim(), so they have no way
of knowing that they should stop once nr_to_reclaim has already been
satisfied. I think the general assumption is that each of these calls
normally does not reclaim a huge number of pages, so, like you said,
the kernel should not over-reclaim too much. However, I don't think
there are any guarantees about this.
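
Not part of the patch, just an illustration of why a user-space
reclaimer should not assume the request is met exactly: it can measure
the actual effect from memory.current instead (the delta is only
approximate if the workload allocates or frees memory concurrently):

def reclaim_and_measure(cgroup, nr_bytes):
    # Request reclaim, then report how much the cgroup's usage dropped.
    def current():
        with open(f"{cgroup}/memory.current") as f:
            return int(f.read())

    before = current()
    with open(f"{cgroup}/memory.reclaim", "w") as f:
        f.write(str(nr_bytes))          # may over- or under-reclaim
    return before - current()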
