Message-ID: <4748f87e-0762-40fc-ab9e-577c9739066f@linux.ibm.com>
Date: Thu, 27 Jun 2024 11:30:26 +0530
From: Donet Tom <donettom@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v1 2/2] mm/migrate: move NUMA hinting fault folio isolation + checks under PTL
On 6/26/24 21:52, David Hildenbrand wrote:
> On 20.06.24 23:29, David Hildenbrand wrote:
>> Currently we always take a folio reference even if migration will not
>> even be tried or isolation failed, requiring us to grab+drop an
>> additional reference.
>>
>> Further, we end up calling folio_likely_mapped_shared() while the folio
>> might have already been unmapped, because after we dropped the PTL, that
>> can easily happen. We want to stop touching mapcounts and friends from
>> such context, and only call folio_likely_mapped_shared() while the folio
>> is still mapped: mapcount information is pretty much stale and
>> unreliable otherwise.
>>
>> So let's move checks into numamigrate_isolate_folio(), rename that
>> function to migrate_misplaced_folio_prepare(), and call that function
>> from callsites where we call migrate_misplaced_folio(), but still with
>> the PTL held.
>>
>> We can now stop taking temporary folio references, and really only take
>> a reference if folio isolation succeeded. Doing the
>> folio_likely_mapped_shared() + folio isolation under PT lock is now
>> similar to how we handle MADV_PAGEOUT.
>>
>> While at it, combine the folio_is_file_lru() checks.
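>>
>> A minimal sketch of the resulting caller pattern (simplified; function
>> names as in this series, locking and error details elided):
>>
>> 	spin_lock(ptl);
>> 	/* The folio is still mapped here, so mapcount-based checks such
>> 	 * as folio_likely_mapped_shared() are still meaningful. A folio
>> 	 * reference is only taken if isolation succeeds (0 return). */
>> 	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
>> 		spin_unlock(ptl);
>> 		return;		/* isolation failed, folio stays mapped */
>> 	}
>> 	spin_unlock(ptl);
>> 	migrate_misplaced_folio(folio, vma, target_nid);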
>>
>> Signed-off-by: David Hildenbrand <david@...hat.com>
>> ---
>
> Donet just reported an issue. I suspect this fixes it -- in any case,
> this is the right thing to do.
>
> From 0833b9896e98c8d88c521609c811a220d14a2e7e Mon Sep 17 00:00:00 2001
> From: David Hildenbrand <david@...hat.com>
> Date: Wed, 26 Jun 2024 18:14:44 +0200
> Subject: [PATCH] Fixup: mm/migrate: move NUMA hinting fault folio
> isolation + checks under PTL
>
> Donet reports an issue during NUMA migration we haven't seen previously:
>
> [ 71.422804] list_del corruption, c00c00000061b3c8->next is LIST_POISON1 (5deadbeef0000100)
> [ 71.422839] ------------[ cut here ]------------
> [ 71.422843] kernel BUG at lib/list_debug.c:56!
> [ 71.422850] Oops: Exception in kernel mode, sig: 5 [#1]
>
> We forgot to convert one "return 0;" in migrate_misplaced_folio_prepare()
> to return an error instead, for the case where the target node is nearly
> full.
>
> Signed-off-by: David Hildenbrand <david@...hat.com>
> ---
> mm/migrate.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 8beedbb42a93..9ed43c1eea5e 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2564,7 +2564,7 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
>  		int z;
>  
>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
> -			return 0;
> +			return -EAGAIN;
>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>  			if (managed_zone(pgdat->node_zones + z))
>  				break;
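>
> For context, the caller treats a 0 return from
> migrate_misplaced_folio_prepare() as "isolation succeeded, reference
> taken". A minimal sketch of that caller side (simplified, error paths
> elided):
>
> 	if (migrate_misplaced_folio_prepare(folio, vma, target_nid))
> 		goto out_map;	/* not isolated, keep the folio mapped */
> 	pte_unmap_unlock(vmf->pte, vmf->ptl);
> 	/* With the stray "return 0;" we could get here without the folio
> 	 * ever having been isolated, and migration would then manipulate
> 	 * LRU list linkage it does not own -- hence the list_del poison. */
> 	migrate_misplaced_folio(folio, vma, target_nid);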
Hi David,

I tested with this patch, and the issue is resolved; I am no longer
seeing the kernel panic.

I also tested page migration, and it is working fine. The relevant
/proc/vmstat counters after the test:

numa_pte_updates 1262330
numa_huge_pte_updates 0
numa_hint_faults 925797
numa_hint_faults_local 3780
numa_pages_migrated 327930
pgmigrate_success 822530

Thanks,
Donet
>
> base-commit: 4b17ce353e02b47b00e2fe87b862f09e8b9a47e6