Message-ID: <CAOUHufbbHWLD1uobPD2L17+YD3y-dFvGy-8kr-c9CkYDHLiEPg@mail.gmail.com>
Date: Tue, 1 Aug 2023 01:12:34 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Yin Fengwei <fengwei.yin@...el.com>,
Yang Shi <shy828301@...il.com>,
"Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
Nathan Chancellor <nathan@...nel.org>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings
On Fri, Jul 28, 2023 at 3:16 AM Ryan Roberts <ryan.roberts@....com> wrote:
>
> On 27/07/2023 18:22, Yu Zhao wrote:
> > On Thu, Jul 27, 2023 at 8:18 AM Ryan Roberts <ryan.roberts@....com> wrote:
> >>
> >> This allows batching the rmap removal with folio_remove_rmap_range(),
> >> which means we avoid spuriously adding a partially unmapped folio to the
> >> deferred split queue in the common case, which reduces split queue lock
> >> contention.
> >>
> >> Previously each page was removed from the rmap individually with
> >> page_remove_rmap(). If the first page belonged to a large folio, this
> >> would cause page_remove_rmap() to conclude that the folio was now
> >> partially mapped and add the folio to the deferred split queue. But
> >> subsequent calls would cause the folio to become fully unmapped, meaning
> >> there is no value to adding it to the split queue.
> >>
> >> A complicating factor is that for platforms where MMU_GATHER_NO_GATHER
> >> is enabled (e.g. s390), __tlb_remove_page() drops a reference to the
> >> page. This means that the folio reference count could drop to zero while
> >> still in use (i.e. before folio_remove_rmap_range() is called). This
> >> does not happen on other platforms because the actual page freeing is
> >> deferred.
> >>
> >> Solve this by appropriately getting/putting the folio to guarantee it
> >> does not get freed early. Given the need to get/put the folio in the
> >> batch path, we stick to the non-batched path if the folio is not large.
> >> While the batched path is functionally correct for a folio with 1 page,
> >> it is unlikely to be as efficient as the existing non-batched path in
> >> this case.
> >>
> >> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
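
(To restate the guard from the commit message in code, as I read it -- a
simplified sketch, not the literal patch, and it assumes the
folio_remove_rmap_range() signature added in patch 2/3 of this series:)

	/*
	 * On MMU_GATHER_NO_GATHER configs (e.g. s390), __tlb_remove_page()
	 * drops a page reference immediately rather than deferring the free,
	 * so hold an extra folio reference to keep the folio alive until the
	 * batched rmap removal below has run.
	 */
	folio_get(folio);

	/* ... zap the nr contiguous ptes mapping this folio, calling
	 * __tlb_remove_page() on each page as the code does today ... */

	folio_remove_rmap_range(folio, page, nr, vma);
	folio_put(folio);
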
> >
> > This ad hoc patch looks unacceptable to me: we can't afford to keep
> > adding special cases.
> >
> > I vote for completely converting zap_pte_range() to use
> > folio_remove_rmap_range(), and that includes tlb_flush_rmap_batch()
> > and other types of large folios, not just anon.
>
> The intent of the change is to avoid the deferred split queue lock contention
> and this is only a problem for anon folios;
This reasoning seems wrong to me: if the goal was to fix the lock
contention, the fix should have been in deferred_split_folio().
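
For reference, the contention being discussed is on
ds_queue->split_queue_lock, taken in deferred_split_folio(); abridged from
mm/huge_memory.c (from memory, so details may be slightly off):

	void deferred_split_folio(struct folio *folio)
	{
		struct deferred_split *ds_queue = get_deferred_split_queue(folio);
		unsigned long flags;

		/* swapcache and already-queued early returns elided */

		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
		if (list_empty(&folio->_deferred_list)) {
			list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
			ds_queue->split_queue_len++;
		}
		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
	}
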
> page cache folios are never split in
> this way.
The goal I see here is to enlighten zap_pte_range() with batch
operations on folios.
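
Concretely, something like the following per-iteration shape (an untested
sketch only; folio_nr_pages_mapped_by_ptes() is a hypothetical helper that
counts how many of the remaining ptes map the same folio, and the
dirty/accessed/rss bookkeeping zap_pte_range() already does is elided):

	struct folio *folio = page_folio(page);
	unsigned int nr = 1, i;
	pte_t ptent;

	if (folio_test_large(folio))
		nr = folio_nr_pages_mapped_by_ptes(folio, pte, addr, end);

	for (i = 0; i < nr; i++) {
		ptent = ptep_get_and_clear_full(mm, addr + i * PAGE_SIZE,
						pte + i, tlb->fullmm);
		/* ptent would feed the existing dirty/accessed handling */
		tlb_remove_tlb_entry(tlb, pte + i, addr + i * PAGE_SIZE);
	}

	/* One rmap update for the whole run, anon or file alike. */
	folio_remove_rmap_range(folio, page, nr, vma);
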
> My intention was to do the smallest change to solve the problem.
I understand the desire. But we can't do this at the cost of making
the codebase harder to maintain.
> I
> don't see the value in reworking a much bigger piece of the code, making it more
> complex, when it's not going to give any clear perf benefits.
"Much bigger ... more complex": I'm not sure how you get this
impression. Have you tried to do it already or is it just a gut
feeling?
Supporting other types of large folios, not just anon, actually makes
it simpler!
> > Otherwise I'll leave
> > it to Matthew and David.
>
> If there is consensus that this is _required_ in order to merge this series,
> then I guess I'll bite the bullet and do it. But my preference is to leave it
> for if/when a reason is found that it is actually bringing benefit.
There is a clear reason here: this patch is *half-baked* because it
doesn't handle tlb_flush_rmap_batch().
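
To be concrete: tlb_flush_rmap_batch() still does the rmap removal one page
at a time (approximately the following, paraphrased from mm/mmu_gather.c from
memory, so take the details with a grain of salt), which is exactly the
per-page pattern this series is trying to move away from:

	static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch,
					 struct vm_area_struct *vma)
	{
		for (int i = 0; i < batch->nr; i++) {
			struct encoded_page *enc = batch->encoded_pages[i];

			if (encoded_page_flags(enc) & ENCODED_PAGE_BIT_DELAY_RMAP) {
				struct page *page = encoded_page_ptr(enc);

				/* still per-page, not per-folio */
				page_remove_rmap(page, vma, false);
			}
		}
	}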