Message-ID: <efc76ea3-9a85-96a3-f3d7-212aeea7cf1c@suse.cz>
Date: Tue, 30 May 2023 10:32:04 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>,
akpm@...ux-foundation.org
Cc: mgorman@...hsingularity.net, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] mm: compaction: skip fast freepages isolation if
enough freepages are isolated
On 5/25/23 14:54, Baolin Wang wrote:
> I've observed that fast isolation often isolates more pages than
> cc->nr_migratepages requires, and the excess freepages will be released
> back to the buddy system. So skip the fast freepages isolation once
> enough freepages are isolated, to save some CPU cycles.
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/compaction.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index eccec84dae82..3ade4c095ed2 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1550,6 +1550,10 @@ static void fast_isolate_freepages(struct compact_control *cc)
>
> spin_unlock_irqrestore(&cc->zone->lock, flags);
>
> + /* Skip fast search if enough freepages isolated */
> + if (cc->nr_freepages >= cc->nr_migratepages)
> + break;
> +
> /*
> * Smaller scan on next order so the total scan is related
> * to freelist_scan_limit.
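
For anyone less familiar with this path, a small self-contained userspace
sketch of the idea follows. It is only an illustration, not the kernel
code: struct toy_cc, toy_isolate_order() and the numbers are made up, and
the real loop in fast_isolate_freepages() does considerably more work per
order. The point it models is that the per-order scan can stop as soon as
the isolated freepages cover the pages queued for migration, since any
surplus would only be handed back to the buddy allocator.

/*
 * Toy model (not the kernel code) of the early-exit check added by the
 * patch: stop scanning lower orders once enough free pages have been
 * isolated to cover the pages queued for migration.
 */
#include <stdio.h>

struct toy_cc {                         /* hypothetical stand-in for compact_control */
	unsigned int nr_freepages;      /* pages isolated so far */
	unsigned int nr_migratepages;   /* pages that still need a target */
};

/* Pretend each scanned order contributes 1 << order pages. */
static unsigned int toy_isolate_order(unsigned int order)
{
	return 1u << order;
}

int main(void)
{
	struct toy_cc cc = { .nr_freepages = 0, .nr_migratepages = 9 };
	int order;

	for (order = 3; order >= 0; order--) {
		cc.nr_freepages += toy_isolate_order((unsigned int)order);
		printf("order %d: isolated %u of %u needed\n",
		       order, cc.nr_freepages, cc.nr_migratepages);

		/* The patch's check: no point scanning further orders. */
		if (cc.nr_freepages >= cc.nr_migratepages)
			break;
	}
	return 0;
}

Running it shows the scan stopping after order 2 (12 >= 9), which mirrors
how the added break avoids isolating freepages that would be released
again anyway.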