Message-Id: <7140E1D7-B1B9-4462-ADDA-E313A7A90A68@gmail.com>
Date: Thu, 10 Nov 2022 13:09:43 -0800
From: Nadav Amit <nadav.amit@...il.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Linux-MM <linux-mm@...ck.org>,
kernel list <linux-kernel@...r.kernel.org>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
David Hildenbrand <david@...hat.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Peter Xu <peterx@...hat.com>, Rik van Riel <riel@...riel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v8 2/2] mm: remove zap_page_range and change callers to
use zap_vma_range
On Nov 7, 2022, at 5:19 PM, Mike Kravetz <mike.kravetz@...cle.com> wrote:
> zap_page_range was originally designed to unmap pages within an address
> range that could span multiple vmas. However, today all callers of
> zap_page_range pass a range entirely within a single vma. In addition,
> the mmu notifier call within zap_page_range is not correct, as it
> should be vma-specific.
>
> Instead of fixing zap_page_range, change all callers to use zap_vma_range
> as it is designed for ranges within a single vma.
I understand the argument about mmu notifiers being broken (which is of
course fixable).
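For reference, the zap_page_range() in question looks roughly like this
(abridged from mm/memory.c as of this cycle; details approximate, and the
zap_details plumbing is elided):

void zap_page_range(struct vm_area_struct *vma, unsigned long start,
		unsigned long size)
{
	struct mmu_notifier_range range;
	struct mmu_gather tlb;
	MA_STATE(mas, &vma->vm_mm->mm_mt, vma->vm_end, vma->vm_end);

	lru_add_drain();
	/*
	 * The notifier range is initialized against the *first* VMA only,
	 * even though the walk below may cross several VMAs -- the
	 * vma-specific breakage the commit message refers to.
	 */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma,
				vma->vm_mm, start, start + size);
	tlb_gather_mmu(&tlb, vma->vm_mm);
	update_hiwater_rss(vma->vm_mm);
	mmu_notifier_invalidate_range_start(&range);
	do {
		/* Unmap the part of this VMA that intersects the range. */
		unmap_single_vma(&tlb, vma, start, range.end, NULL);
	} while ((vma = mas_find(&mas, start + size - 1)) != NULL);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
}

A single-VMA zap_vma_range() can drop the loop and set up a notifier
range that actually matches the VMA it is handed.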
But are the callers really able to guarantee that the ranges all lie
within a single VMA? I am not familiar with all the users, but how,
for instance, can tcp_zerocopy_receive() guarantee that no one did an
mprotect() of some sort that split the original VMA?
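To make the splitting scenario concrete, a minimal userspace sketch
(a hypothetical illustration, not taken from this thread):

#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	/* One anonymous mapping -> one VMA covering three pages. */
	char *p = mmap(NULL, 3 * pagesz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	/*
	 * Changing the protection of only the middle page forces the
	 * kernel to split the mapping into three VMAs, so a later zap
	 * of [p, p + 3 * pagesz) would span all three.
	 */
	mprotect(p + pagesz, pagesz, PROT_READ);
	return 0;
}

After the mprotect(), /proc/self/maps shows three adjacent entries where
there was one.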