Message-Id: <7048D2B5-5FA5-4F72-8FDC-A02411CFD71D@gmail.com>
Date: Sat, 29 Oct 2022 17:54:44 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Peter Xu <peterx@...hat.com>, Linux-MM <linux-mm@...ck.org>,
kernel list <linux-kernel@...r.kernel.org>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
David Hildenbrand <david@...hat.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Rik van Riel <riel@...riel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Wei Chen <harperchen1110@...il.com>, stable@...r.kernel.org
Subject: Re: [PATCH v2] hugetlb: don't delete vma_lock in hugetlb
MADV_DONTNEED processing

On Oct 29, 2022, at 5:15 PM, Mike Kravetz <mike.kravetz@...cle.com> wrote:

> zap_page_range is a bit confusing. It appears that the passed range can
> span multiple vmas. Otherwise, there would be no do while loop. Yet, there
> is only one mmu_notifier_range_init call specifying the passed vma.
>
> It appears all callers pass a range entirely within a single vma.
>
> The modifications above would work for a range within a single vma. However,
> things would be more complicated if the range can indeed span multiple vmas.
> For multiple vmas, we would need to check the first and last vmas for
> pmd sharing.
>
> Anyone know more about this seemingly confusing behavior? Perhaps the range
> spanning multiple vmas was left over from earlier code?

I don’t have personal knowledge, but I noticed that it does not make much
sense, at least for MADV_DONTNEED. I tried to batch the TLB flushes across
VMAs for madvise [1].
I need to get back to it sometime.
[1] https://lore.kernel.org/lkml/20210926161259.238054-7-namit@vmware.com/
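
For context, the structure being discussed looks roughly like the sketch
below (a simplified rendering of zap_page_range() from mainline around the
v6.0 era, before the maple-tree conversion of the vma walk; treat it as
illustrative, not verbatim kernel source). It shows the single
mmu_notifier_range_init() call against the passed vma next to the do-while
loop written as if the range could span further vmas:

	/*
	 * Simplified sketch, not current mainline: the vma iteration is
	 * shown in its pre-maple-tree (vm_next) form.
	 */
	void zap_page_range(struct vm_area_struct *vma, unsigned long start,
			    unsigned long size)
	{
		struct mmu_notifier_range range;
		struct mmu_gather tlb;

		lru_add_drain();
		/* One notifier range, initialized against the passed vma only. */
		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma,
					vma->vm_mm, start, start + size);
		tlb_gather_mmu(&tlb, vma->vm_mm);
		update_hiwater_rss(vma->vm_mm);
		mmu_notifier_invalidate_range_start(&range);
		/* ...yet the loop walks on to any following vmas in the range. */
		do {
			unmap_single_vma(&tlb, vma, start, range.end, NULL);
		} while ((vma = vma->vm_next) && vma->vm_start < range.end);
		mmu_notifier_invalidate_range_end(&range);
		tlb_finish_mmu(&tlb);
	}

The net effect is that only the first vma feeds the notifier range, which is
why a range that really did span multiple vmas would need the extra care Mike
mentions (e.g. checking both the first and last vmas for pmd sharing).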