Message-ID: <0785a15e-29fb-4801-9743-3d08e381d506@redhat.com>
Date: Tue, 4 Feb 2025 12:38:31 +0100
From: David Hildenbrand <david@...hat.com>
To: Barry Song <21cnbao@...il.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org
Cc: baolin.wang@...ux.alibaba.com, chrisl@...nel.org, ioworker0@...il.com,
kasong@...cent.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
lorenzo.stoakes@...cle.com, ryan.roberts@....com, v-songbaohua@...o.com,
x86@...nel.org, ying.huang@...el.com, zhengtangquan@...o.com
Subject: Re: [PATCH v3 3/4] mm: Support batched unmap for lazyfree large
folios during reclamation

Hi,

>  	unsigned long hsz = 0;
>  
> @@ -1780,6 +1800,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  				hugetlb_vma_unlock_write(vma);
>  			}
>  			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> +		} else if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
> +			   can_batch_unmap_folio_ptes(address, folio, pvmw.pte)) {
> +			nr_pages = folio_nr_pages(folio);
> +			flush_cache_range(vma, range.start, range.end);
> +			pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
> +			if (should_defer_flush(mm, flags))
> +				set_tlb_ubc_flush_pending(mm, pteval, address,
> +							  address + folio_size(folio));
> +			else
> +				flush_tlb_range(vma, range.start, range.end);
>  		} else {
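
(For context: can_batch_unmap_folio_ptes() is defined elsewhere in the patch
and is not visible in the quoted hunk. The sketch below is only illustrative
of what such a check has to verify before the batch path above is safe --
namely that the lazyfree anon folio is mapped by one contiguous run of
present PTEs starting at pvmw.pte -- it is not the patch's actual helper.)

	/*
	 * Illustrative sketch only, not the implementation from this series.
	 * Batching is only safe when every PTE of the large folio is present
	 * and maps the folio's pages in order, so that a single
	 * get_and_clear_full_ptes() plus one TLB flush covers the range.
	 */
	static bool can_batch_unmap_folio_ptes_sketch(struct folio *folio,
						      pte_t *ptep)
	{
		unsigned long pfn = folio_pfn(folio);
		int i, max_nr = folio_nr_pages(folio);

		/* Lazyfree means clean anonymous memory, not swap-backed. */
		if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
			return false;

		/* Every PTE must be present and map the next folio page. */
		for (i = 0; i < max_nr; i++, ptep++) {
			pte_t pte = ptep_get(ptep);

			if (!pte_present(pte) || pte_pfn(pte) != pfn + i)
				return false;
		}
		return true;
	}

When the check passes, the hunk above clears all nr_pages PTEs in one call
and either queues a deferred TLB flush via set_tlb_ubc_flush_pending() or
flushes the whole range at once, instead of doing this per base page.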

I have some fixes [1] that will collide with this series. I'm currently
preparing a v2, and am not 100% sure when the fixes will get queued+merged.

I'll base them against mm-stable for now, and send them out based on
that, to avoid the conflicts here (they should all be fairly easy to
resolve at a quick glance).

So we might have to refresh this series here if the fixes go in first.

[1] https://lkml.kernel.org/r/20250129115411.2077152-1-david@redhat.com

--
Cheers,
David / dhildenb