Message-Id: <20200630172713.496590a923744c0e0160d36b@linux-foundation.org>
Date: Tue, 30 Jun 2020 17:27:13 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Matthew Wilcox <willy@...radead.org>,
Vladimir Davydov <vdavydov@...tuozzo.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>,
Dave Hansen <dave.hansen@...el.com>,
Ying Huang <ying.huang@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [Patch] mm: Increase pagevec size on large system

On Mon, 29 Jun 2020 09:57:42 -0700 Tim Chen <tim.c.chen@...ux.intel.com> wrote:
>
>
> I am okay with Matthew's suggestion of keeping the stack pagevec size unchanged.
> Andrew, do you have a preference?
>
> I was assuming that people who really care about saving kernel memory
> would make CONFIG_NR_CPUS small. I also have a hard time coming up
> with a better scheme.
>
> Otherwise, we will have to adjust the pagevec size when we actually
> find out how many CPUs we have brought online. It seems like a lot
> of added complexity to go that route.

Even if we were to do this, the worst-case stack usage on the largest
systems might be an issue. If it isn't, then we might as well hard-wire
it to 31 elements anyway.

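(For reference, a back-of-the-envelope sketch of what 31 elements would
cost relative to the current 15, assuming the layout in
include/linux/pagevec.h around v5.8 and 8-byte pointers; the comments
below are mine, not from the patch:)

	/* 15 pointers + header align the pagevec structure to a power of two */
	#define PAGEVEC_SIZE	15	/* the patch under discussion raises this
					 * to 31 on large systems */

	struct pagevec {
		unsigned char nr;
		bool percpu_pvec_drained;
		struct page *pages[PAGEVEC_SIZE];	/* 31 - 15 = 16 extra pointers,
							 * 16 * 8 = 128 more bytes for
							 * each on-stack pagevec */
	};
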
I dunno. An extra 128 bytes of stack doesn't sound toooo bad, and the
performance benefit is significant. Perhaps we just go with the
original patch. If there are any on-stack pagevecs in the page reclaim
path then perhaps we could create a new mini-pagevec for just those, or
look at simply removing the pagevec optimization in there altogether.
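
(Purely to illustrate the mini-pagevec idea; nothing below exists in
the tree, and the names and sizes are hypothetical. The handful of
on-stack users in the reclaim path could keep a fixed 15-entry vector
while the per-cpu LRU pagevecs grow with the machine:)

	/* Hypothetical sketch: a small, fixed-size page vector for on-stack
	 * use in the reclaim path, independent of whatever PAGEVEC_SIZE the
	 * per-cpu LRU pagevecs end up with. */
	#define RECLAIM_PVEC_SIZE	15

	struct reclaim_pvec {
		unsigned char nr;
		struct page *pages[RECLAIM_PVEC_SIZE];
	};

	/* Add a page; returns false once the vector is full and must be
	 * processed/drained by the caller. */
	static inline bool reclaim_pvec_add(struct reclaim_pvec *pvec,
					    struct page *page)
	{
		pvec->pages[pvec->nr++] = page;
		return pvec->nr < RECLAIM_PVEC_SIZE;
	}

That way the larger pagevec never lands on the stack and the reclaim
path pays nothing extra.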