Message-ID: <87bkn3rw8v.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 13 Jan 2023 10:42:08 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Zi Yan <ziy@...dia.com>,
Yang Shi <shy828301@...il.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Oscar Salvador <osalvador@...e.de>,
Matthew Wilcox <willy@...radead.org>,
Bharata B Rao <bharata@....com>,
Alistair Popple <apopple@...dia.com>,
haoxin <xhao@...ux.alibaba.com>
Subject: Re: [PATCH -v2 0/9] migrate_pages(): batch TLB flushing

Mike Kravetz <mike.kravetz@...cle.com> writes:

> On 01/12/23 15:17, Huang, Ying wrote:
>> Mike Kravetz <mike.kravetz@...cle.com> writes:
>>
>> > On 01/12/23 08:09, Huang, Ying wrote:
>> >> Hi, Mike,
>> >>
>> >> Mike Kravetz <mike.kravetz@...cle.com> writes:
>> >>
>> >> > On 01/10/23 17:53, Mike Kravetz wrote:
>> >> >> Just saw the following easily reproducible issue on next-20230110. Have not
>> >> >> verified it is related to/caused by this series, but it looks suspicious.
>> >> >
>> >> > Verified this is caused by the series,
>> >> >
>> >> > 734cbddcfe72 migrate_pages: organize stats with struct migrate_pages_stats
>> >> > to
>> >> > 323b933ba062 migrate_pages: batch flushing TLB
>> >> >
>> >> > in linux-next.
>> >>
>> >> Thanks for reporting.
>> >>
>> >> I tried this yesterday (next-20230111), but failed to reproduce it. Can
>> >> you share your kernel config? Is there any other setup needed?
>> >
>> > Config file is attached.
>>
>> Thanks!
>>
>> > Are you writing a REALLY big value to nr_hugepages? By REALLY big I
>> > mean a value that is impossible to fulfill. This will result in
>> > successful hugetlb allocations until __alloc_pages starts to fail. At
>> > this point we will be stressing compaction/migration trying to find more
>> > contiguous pages.
>> >
>> > Not sure if it matters, but I am running on a 2 node VM. The 2 nodes
>> > may be important as the hugetlb allocation code will try a little harder
>> > alternating between nodes that may perhaps stress compaction/migration
>> > more.
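
For reference, here is roughly what I am running based on that
description (a sketch only; the exact value should not matter as long
as it asks for far more huge pages than the machine can provide):

  echo 1000000 > /proc/sys/vm/nr_hugepages
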
>>
>> Tried again on a 2-node machine. Still cannot reproduce it.
>>
>> >> BTW: can you bisect to one specific commit which causes the bug in the
>> >> series?
>> >
>> > I should have some time to isolate in the next day or so.
>
> Isolated to patch,
> [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()
>
> Actually, recreated/isolated by just applying this series to v6.2-rc3 in an
> effort to eliminate any possible noise in linux-next.
>
> Spent a little time looking at modifications made there, but nothing stood out.
> Will investigate more as time allows.

Thank you very much! That's really helpful.

I checked that patch again and found an issue with its handling of
non-LRU movable pages: after move_to_new_folio(), they were still put
through the LRU handling that is only meant for LRU pages. Do you
have zram enabled in your test system? zram stores its data in
zsmalloc, whose pages are non-LRU movable, so it would exercise
exactly this path.
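
(Something like "grep zram /proc/swaps" should show whether zram swap
is in use.)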

Can you try the debug patch below? It applies on top of

  [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()

The following patches in the series need to be rebased on this change.
I will test with zram enabled here too; a rough setup sketch follows.
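
Something like the following should set up a zram swap device for the
test (a sketch only: the device name and sizes are arbitrary and may
need adjusting for your machine):

  modprobe zram
  echo 4G > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon /dev/zram0

Then fill memory so that the system swaps while nr_hugepages is being
raised.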

Best Regards,
Huang, Ying

---------------------------8<------------------------------------------------------
diff --git a/mm/migrate.c b/mm/migrate.c
index 4c35c2a49574..7153d954b8a2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1187,10 +1187,14 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 	int rc;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
+	bool is_lru = !__PageMovable(&src->page);
 
 	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
 
 	rc = move_to_new_folio(dst, src, mode);
+	/* A non-LRU movable page has no LRU to be pushed to below. */
+	if (unlikely(!is_lru))
+		goto out_unlock_both;
 
 	/*
 	 * When successful, push dst to LRU immediately: so that if it
@@ -1211,6 +1215,7 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 		remove_migration_ptes(src,
 			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
 
+out_unlock_both:
 	folio_unlock(dst);
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)