Message-ID: <alpine.DEB.2.20.1803201514340.14003@chino.kir.corp.google.com>
Date: Tue, 20 Mar 2018 15:15:13 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Andrey Ryabinin <aryabinin@...tuozzo.com>
cc: Michal Hocko <mhocko@...nel.org>,
"Li,Rongqing" <lirongqing@...du.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"hannes@...xchg.org" <hannes@...xchg.org>
Subject: Re: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
On Wed, 21 Mar 2018, Andrey Ryabinin wrote:
> >>> It would probably be best to limit the
> >>> nr_pages to the amount that needs to be reclaimed, though, rather than
> >>> over-reclaiming.
> >>
> >> How do you achieve that? The charging path is not synchronized with the
> >> shrinking one at all.
> >>
> >
> > The point is to get a better guess at how many pages, up to
> > SWAP_CLUSTER_MAX, need to be reclaimed instead of 1.
> >
> >>> If you wanted to be invasive, you could change page_counter_limit() to
> >>> return the count - limit, fix up the callers that look for -EBUSY, and
> >>> then use max(val, SWAP_CLUSTER_MAX) as your nr_pages.
> >>
> >> I am not sure I understand
> >>
> >
> > Have page_counter_limit() return the number of pages over limit, i.e.
> > count - limit, since it compares the two anyway. Fix up existing callers
> > and then floor that value at SWAP_CLUSTER_MAX in
> > mem_cgroup_resize_limit(). It's a more accurate guess than either 1 or
> > 1024.
> >
>
> JFYI, it's never 1, it's always SWAP_CLUSTER_MAX.
> See try_to_free_mem_cgroup_pages():
> ....
> 	struct scan_control sc = {
> 		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
>
Is SWAP_CLUSTER_MAX the best answer if I'm lowering the limit by 1GB?
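
Roughly what I had in mind, as an untested sketch (the retry loop in
page_counter_limit() and its other callers that test for -EBUSY are
elided, so take this as illustration rather than a patch):

	/*
	 * Sketch: return how many pages the counter is over the new
	 * limit instead of a bare -EBUSY, so the caller knows how much
	 * it needs to reclaim.
	 */
	long page_counter_limit(struct page_counter *counter,
				unsigned long limit)
	{
		long count = atomic_long_read(&counter->count);

		if (count > limit)
			return count - limit;	/* pages over new limit */

		/* existing xchg() logic to install the new limit ... */
		return 0;
	}

mem_cgroup_resize_limit() can then pass that overage straight to
try_to_free_mem_cgroup_pages() instead of 1:

	over = page_counter_limit(counter, limit);
	if (over <= 0)
		break;	/* new limit is in place */

	/*
	 * scan_control still floors nr_to_reclaim at SWAP_CLUSTER_MAX,
	 * but a 1GB overage now asks for 1GB instead of 32 pages per
	 * iteration.
	 */
	if (!try_to_free_mem_cgroup_pages(memcg, over, GFP_KERNEL,
					  !memsw)) {
		ret = -EBUSY;
		break;
	}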