Message-ID: <Y7UuzV94Yo59PwTa@dhcp22.suse.cz>
Date: Wed, 4 Jan 2023 08:46:21 +0100
From: Michal Hocko <mhocko@...e.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org, netdev@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
David Hildenbrand <david@...hat.com>,
Peter Xu <peterx@...hat.com>,
Nadav Amit <nadav.amit@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Rik van Riel <riel@...riel.com>,
Will Deacon <will@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
Palmer Dabbelt <palmer@...belt.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Christian Brauner <brauner@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: remove zap_page_range and create zap_vma_pages

On Tue 03-01-23 16:27:32, Mike Kravetz wrote:
> zap_page_range was originally designed to unmap pages within an address
> range that could span multiple vmas. While working on [1], it was
> discovered that all callers of zap_page_range pass a range entirely within
> a single vma. In addition, the mmu notification call within
> zap_page_range does not correctly handle ranges that span multiple vmas. When
> crossing a vma boundary, a new mmu_notifier_range_init/end call pair
> with the new vma should be made.
>
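If I read the v6.1 zap_page_range() correctly, the notifier range is
initialized once against the first vma, while the unmap loop can then
walk past that vma (abridged from mm/memory.c and quoted from memory,
so treat the details as best-effort):

	struct mmu_notifier_range range;

	/* initialized once, against the first vma only */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma,
				vma->vm_mm, start, start + size);
	mmu_notifier_invalidate_range_start(&range);
	do {
		unmap_single_vma(&tlb, vma, start, range.end, NULL);
		/* may advance into the next vma... */
	} while ((vma = mas_find(&mas, end - 1)) != NULL);
	/* ...but there is only one _end call, for the original range */
	mmu_notifier_invalidate_range_end(&range);

so a range spanning multiple vmas would indeed want a fresh
mmu_notifier_range_init()/mmu_notifier_invalidate_range_start() and
mmu_notifier_invalidate_range_end() pair per vma, as you say.
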
> Instead of fixing zap_page_range, do the following:
> - Create a new routine zap_vma_pages() that will remove all pages within
>   the passed vma. Most users of zap_page_range pass the entire vma and
>   can use this new routine.
> - For callers of zap_page_range not passing the entire vma, instead call
>   zap_page_range_single().
> - Remove zap_page_range.
>
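
For reference, the new helper can presumably be a trivial wrapper
around zap_page_range_single() covering the whole vma, something along
these lines (a sketch only, assuming the (vma, address, size, details)
signature of zap_page_range_single() in mm/memory.c):

	static inline void zap_vma_pages(struct vm_area_struct *vma)
	{
		/* unmap every page from vm_start up to vm_end */
		zap_page_range_single(vma, vma->vm_start,
				      vma->vm_end - vma->vm_start, NULL);
	}

which would turn the typical whole-vma caller

	zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start);

into a plain

	zap_vma_pages(vma);
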
> [1] https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.kravetz@oracle.com/
> Suggested-by: Peter Xu <peterx@...hat.com>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>

This looks even better than the previous version.

Acked-by: Michal Hocko <mhocko@...e.com>

One minor nit below:
[...]
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index ad608ef2a243..ffa36cfe5884 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2713,7 +2713,7 @@ void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
> *
> * The caller must hold lock_page_memcg(). Most callers have the folio
> * locked. A few have the folio blocked from truncation through other
> - * means (eg zap_page_range() has it mapped and is holding the page table
> + * means (eg zap_vma_pages() has it mapped and is holding the page table
> * lock). This can also be called from mark_buffer_dirty(), which I
> * cannot prove is always protected against truncate.

Strictly speaking, this should be unmap_page_range().
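
If I follow the call chain correctly, that is the function which ends
up walking the page tables with the page table lock held:

	zap_vma_pages()
	  zap_page_range_single()
	    unmap_single_vma()
	      unmap_page_range()
	        ...
	          zap_pte_range()	/* takes the pte lock */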
--
Michal Hocko
SUSE Labs