Message-ID: <7f34e789-e01f-4929-a618-b73c04ebf4d2@suse.cz>
Date: Mon, 12 Feb 2024 11:32:30 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, akpm@...ux-foundation.org,
Zi Yan <ziy@...dia.com>
Cc: mgorman@...hsingularity.net, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: compaction: update the cc->nr_migratepages when
allocating or freeing the freepages
On 1/22/24 14:01, Baolin Wang wrote:
> Currently we use the 'cc->nr_freepages >= cc->nr_migratepages' comparison
> to ensure that enough freepages are isolated in isolate_freepages(). However,
> compaction_alloc() only decreases cc->nr_freepages without updating
> cc->nr_migratepages, which wastes CPU cycles and causes too many freepages
> to be isolated.
Hm yeah, I guess this can happen with fast_isolate_freepages() if it returns
with some, but not all, of the freepages expected to be needed, and then we
get to isolate_freepages() again.
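Roughly, the imbalance is between these two places (a schematic sketch, not
the literal kernel code):

        /* isolate_freepages(): stop once "enough" free pages are isolated */
        if (cc->nr_freepages >= cc->nr_migratepages)
                break;

        /* compaction_alloc(): hand out a free page, only one counter moves */
        cc->nr_freepages--;

so once the free list runs dry mid-migration, the next isolate_freepages()
call still aims for the full original cc->nr_migratepages even though part
of that has already been migrated, and we isolate more free pages than we
actually need.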
> So we should also update cc->nr_migratepages when allocating or freeing
> the freepages, in order to avoid isolating excess freepages. With this
> change, fewer free pages are scanned and isolated when running thpcompact
> on my Arm64 server:
>                                          k6.7      k6.7_patched
> Ops Compaction pages isolated    120692036.00      118160797.00
> Ops Compaction migrate scanned   131210329.00      154093268.00
> Ops Compaction free scanned     1090587971.00     1080632536.00
> Ops Compact scan efficiency             12.03             14.26
>
> Moreover, I did not see obvious latency improvements, likely because
> isolating freepages is not the bottleneck in the thpcompact test case.
>                           k6.7                 k6.7_patched
> Amean fault-both-1      1089.76 (  0.00%)     1080.16 *  0.88%*
> Amean fault-both-3      1616.48 (  0.00%)     1636.65 * -1.25%*
> Amean fault-both-5      2266.66 (  0.00%)     2219.20 *  2.09%*
> Amean fault-both-7      2909.84 (  0.00%)     2801.90 *  3.71%*
> Amean fault-both-12     4861.26 (  0.00%)     4733.25 *  2.63%*
> Amean fault-both-18     7351.11 (  0.00%)     6950.51 *  5.45%*
> Amean fault-both-24     9059.30 (  0.00%)     9159.99 * -1.11%*
> Amean fault-both-30    10685.68 (  0.00%)    11399.02 * -6.68%*
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/compaction.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 066b72b3471a..6c84e3a5b32b 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1779,6 +1779,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>          dst = list_entry(cc->freepages.next, struct folio, lru);
>          list_del(&dst->lru);
>          cc->nr_freepages--;
> +        cc->nr_migratepages--;
This breaks the TRACE_EVENT(mm_compaction_migratepages) tracepoint, which does

        __entry->nr_failed = cc->nr_migratepages - nr_succeeded;

and is called after migrate_pages() finishes, so nr_failed will now underflow.
We probably need to take a snapshot of cc->nr_migratepages before calling
migrate_pages() and then feed that to the tracepoint instead of cc.
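Something along these lines in compact_zone() is what I have in mind; an
untested sketch, assuming the tracepoint's TP_PROTO is changed to take the
snapshot value instead of the compact_control pointer:

        /*
         * Snapshot before migrate_pages(), i.e. before compaction_alloc()
         * and compaction_free() start adjusting cc->nr_migratepages.
         */
        unsigned int nr_migratepages = cc->nr_migratepages;

        err = migrate_pages(&cc->migratepages, compaction_alloc,
                        compaction_free, (unsigned long)cc, cc->mode,
                        MR_COMPACTION, &nr_succeeded);

        trace_mm_compaction_migratepages(nr_migratepages, nr_succeeded);

with nr_failed then computed in TP_fast_assign() as
nr_migratepages - nr_succeeded.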
>
>          return dst;
>  }
> @@ -1794,6 +1795,7 @@ static void compaction_free(struct folio *dst, unsigned long data)
>
>          list_add(&dst->lru, &cc->freepages);
>          cc->nr_freepages++;
> +        cc->nr_migratepages++;
>  }
>
>  /* possible outcome of isolate_migratepages */