Message-ID: <CALvZod7X3PsM2+ZrWXwb75FNBBjaBGJpjd+WVmzr5hStROvW+g@mail.gmail.com>
Date: Wed, 20 Jul 2022 10:49:53 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <songmuchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>, NeilBrown <neilb@...e.de>,
Alistair Popple <apopple@...dia.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Peter Xu <peterx@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>, Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH v4] mm: vmpressure: don't count proactive reclaim in vmpressure
On Wed, Jul 20, 2022 at 2:24 AM Michal Hocko <mhocko@...e.com> wrote:
>
[...]
>
> I think what we are missing here is
> - explain that this doesn't have any effect on existing users of
> vmpressure user interface because that is cgroup v1 and memory.reclaim
> is v2 feature. This is a trivial statement but quite useful for future
> readers of this commit
> - explain the effect on the networking layer and typical usecases
> memory.reclaim is used for currently and ideally document that.
I agree with the above two points (Yosry, please address those), but
the third point below is orthogonal, and we don't really need an
answer to it for this patch to be accepted.
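For future readers, a minimal sketch (not part of the patch; the
cgroup path and size are just examples) of how memory.reclaim is
driven from userspace for proactive reclaim on cgroup v2:

/*
 * Example only: ask the kernel to proactively reclaim ~1GiB from a
 * cgroup v2 memcg by writing a byte count to memory.reclaim.  The
 * "workload" cgroup path below is hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/workload/memory.reclaim";
	const char *amount = "1073741824";	/* bytes to try to reclaim */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/*
	 * The write may fail (e.g. with EAGAIN) if the kernel could not
	 * reclaim the requested amount; real tooling would retry or back
	 * off rather than just report the error.
	 */
	if (write(fd, amount, strlen(amount)) < 0)
		perror("write");
	close(fd);
	return 0;
}

The cgroup v1 vmpressure notifications, by contrast, are consumed by
registering an eventfd against memory.pressure_level through
cgroup.event_control, so the two interfaces do not overlap and
existing vmpressure users are unaffected, which is exactly the first
point above.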
> - how are we going to deal with users who would really want to use
> memory.reclaim interface as a replacement for existing hard/high
> memory reclaim? Is that even something that the interface is intended
> for?
I do agree that this question is important. Nowadays I am looking at
this from a different perspective and use-case: more concretely, how
(and why) to replace vmpressure-based network throttling for cgroup
v2. I will start a separate thread for that discussion.