Message-ID: <3f0ebd90-1aca-1dfc-3b92-bdb991d0fb29@intel.com>
Date: Fri, 21 May 2021 15:44:49 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Mel Gorman <mgorman@...hsingularity.net>,
Linux-MM <linux-mm@...ck.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>,
Nicholas Piggin <npiggin@...il.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/6] mm/page_alloc: Limit the number of pages on PCP lists when reclaim is active

On 5/21/21 3:28 AM, Mel Gorman wrote:
> +static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone)
> +{
> + int high = READ_ONCE(pcp->high);
> +
> + if (unlikely(!high))
> + return 0;
> +
> + if (!test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
> + return high;
> +
> + /*
> + * If reclaim is active, limit the number of pages that can be
> + * stored on pcp lists
> + */
> + return READ_ONCE(pcp->batch) << 2;
> +}
Should there be a sanity check on this? Let's say we had one of those
weirdo zones with tons of CPUs and a small low_wmark_pages(). Could we
have a case where:
pcp->high < pcp->batch<<2
and this effectively *raises* nr_pcp_high()?
It's not possible with the current pcp->high calculation, but does
anything prevent it now?