Message-Id: <20211206150421.fc06972fac949a5f6bc8b725@linux-foundation.org>
Date: Mon, 6 Dec 2021 15:04:21 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Minchan Kim <minchan@...nel.org>
Cc: Michal Hocko <mhocko@...e.com>,
David Hildenbrand <david@...hat.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
John Dias <joaodias@...gle.com>
Subject: Re: [PATCH] mm: don't call lru draining in the nested
lru_cache_disable
On Mon, 6 Dec 2021 14:10:06 -0800 Minchan Kim <minchan@...nel.org> wrote:
> lru_cache_disable involves IPIs to drain the pagevec of each core,
> which can take quite a long time to complete depending on how busy
> the CPUs are, making allocation slow, up to several hundred
> milliseconds. Furthermore, the repeated draining in
> alloc_contig_range makes things worse, considering that callers of
> alloc_contig_range usually retry multiple times in a loop.
>
> This patch makes lru_cache_disable aware that the pagevec is
> already disabled. With that, users of alloc_contig_range can
> disable the lru cache in advance in their context before the
> repeated trials, so they avoid the multiple costly drainings
> during cma allocation.
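>
> For illustration, the intended caller-side usage looks roughly like
> the sketch below (not part of this patch; the retry bound and error
> handling are made up, only alloc_contig_range's signature and the
> nested lru_cache_disable() inside it are real):
>
>     lru_cache_disable();    /* one up-front drain for all trials */
>     for (tries = 0; tries < MAX_TRIES; tries++) {
>         ret = alloc_contig_range(start, end, MIGRATE_CMA, GFP_KERNEL);
>         if (ret != -EBUSY)
>             break;  /* success, or an error a retry won't fix */
>         /*
>          * The nested lru_cache_disable() inside
>          * alloc_contig_range() is now a cheap atomic_inc
>          * instead of another round of IPIs.
>          */
>     }
>     lru_cache_enable();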
Isn't this racy?
> ...
>
> @@ -859,7 +869,12 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
> */
> void lru_cache_disable(void)
> {
> - atomic_inc(&lru_disable_count);
> + /*
> + * If someone else has already disabled the lru cache,
> + * just bump lru_disable_count and return.
> + */
> + if (atomic_inc_not_zero(&lru_disable_count))
> + return;
> #ifdef CONFIG_SMP
> /*
> * lru_add_drain_all in the force mode will schedule draining on
> @@ -873,6 +888,7 @@ void lru_cache_disable(void)
> #else
> lru_add_and_bh_lrus_drain();
> #endif
There's a window here where lru_disable_count==0 and new pages can
still get added to the per-cpu lru pagevecs?
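
Roughly this interleaving (task names made up for illustration):

	taskA                                taskB
	lru_cache_disable()
	  atomic_inc_not_zero() sees 0
	  __lru_add_drain_all(true)
	                                     lru_cache_add()
	                                       pagevec_add_and_need_flush()
	                                         lru_disable_count is still
	                                         0, so the page stays in
	                                         the per-cpu pagevec
	  atomic_inc()

taskB's page then sits in a pagevec even though the cache is
supposedly disabled.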
> + atomic_inc(&lru_disable_count);
> }
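
One way to close that particular window (a sketch only, untested) is
to take the reference before draining, so the count is never observed
as zero once a disabler exists, and only the 0 -> 1 transition pays
for the drain:

	void lru_cache_disable(void)
	{
		/*
		 * Bump the count first: from here on,
		 * pagevec_add_and_need_flush() sees a nonzero count
		 * and flushes new pages immediately.
		 */
		if (atomic_inc_return(&lru_disable_count) > 1)
			return;
	#ifdef CONFIG_SMP
		__lru_add_drain_all(true);
	#else
		lru_add_and_bh_lrus_drain();
	#endif
	}

Note this still lets a second disabler return while the first one's
drain is in flight, so it narrows the window rather than being a
complete answer.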