Message-ID: <3d0ebaf0-894c-8238-c18b-92d624dc39a6@redhat.com>
Date:   Thu, 3 Aug 2023 15:57:11 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Yu Zhao <yuzhao@...gle.com>, Ryan Roberts <ryan.roberts@....com>,
        Matthew Wilcox <willy@...radead.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Yin Fengwei <fengwei.yin@...el.com>,
        Yang Shi <shy828301@...il.com>,
        "Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
        Nathan Chancellor <nathan@...nel.org>,
        Alexander Gordeev <agordeev@...ux.ibm.com>,
        Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings

On 27.07.23 19:22, Yu Zhao wrote:
> On Thu, Jul 27, 2023 at 8:18 AM Ryan Roberts <ryan.roberts@....com> wrote:
>>
>> This allows batching the rmap removal with folio_remove_rmap_range(),
>> which means we avoid spuriously adding a partially unmapped folio to the
>> deferred split queue in the common case, which reduces split queue lock
>> contention.
>>
>> Previously each page was removed from the rmap individually with
>> page_remove_rmap(). If the first page belonged to a large folio, this
>> would cause page_remove_rmap() to conclude that the folio was now
>> partially mapped and add the folio to the deferred split queue. But
>> subsequent calls would cause the folio to become fully unmapped, meaning
>> there is no value to adding it to the split queue.
>>
>> A complicating factor is that for platforms where MMU_GATHER_NO_GATHER
>> is enabled (e.g. s390), __tlb_remove_page() drops a reference to the
>> page. This means that the folio reference count could drop to zero while
>> still in use (i.e. before folio_remove_rmap_range() is called). This
>> does not happen on other platforms because the actual page freeing is
>> deferred.
>>
>> Solve this by appropriately getting/putting the folio to guarantee it
>> does not get freed early. Given the need to get/put the folio in the
>> batch path, we stick to the non-batched path if the folio is not large.
>> While the batched path is functionally correct for a folio with 1 page,
>> it is unlikely to be as efficient as the existing non-batched path in
>> this case.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> 
> This ad hoc patch looks unacceptable to me: we can't afford to keep
> adding special cases.
> 
> I vote for completely converting zap_pte_range() to use
> folio_remove_rmap_range(), and that includes tlb_flush_rmap_batch()
> and other types of large folios, not just anon. Otherwise I'll leave
> it to Matthew and David.

The !anon case with tlb_delay_rmap() really hurts my brain (again).

We're already special-casing on !anon ... so splitting anon and !anon 
also doesn't sound completely off to me.

But yes, we should find ways to avoid any special-casing here, or at 
least minimize it.
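
For concreteness, what the batch path boils down to as I read it (a 
sketch only, not the literal patch; folio_remove_rmap_range() is the 
helper from earlier in this series, and zap_anon_pte_range() is just a 
stand-in name for the batched PTE clearing):

	if (folio_test_anon(folio) && folio_test_large(folio)) {
		/*
		 * Pin the folio: on MMU_GATHER_NO_GATHER platforms
		 * (e.g. s390), __tlb_remove_page() drops a reference
		 * immediately, so without this the refcount could hit
		 * zero before the rmap below is removed.
		 */
		folio_get(folio);
		nr = zap_anon_pte_range(tlb, vma, folio, pte, addr, end);
		/* One rmap update for the whole range, not per page. */
		folio_remove_rmap_range(folio, page, nr, vma);
		folio_put(folio);
	} else {
		/* Small/!anon folios keep the existing per-page path. */
		page_remove_rmap(page, vma, false);
	}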

(The bigger problem I have with this patch, as raised in my replies, is 
that it messes up the order in which we adjust mapcount+refcount, and I 
am *really* not a fan of that :) )
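
Spelled out, the invariant I'm worried about (again, a sketch of my 
reading, not code from the patch):

	/*
	 * Each PTE mapping of an anon folio holds a reference, so we
	 * normally expect
	 *
	 *	folio_ref_count(folio) >= folio_mapcount(folio)
	 *
	 * at any point. Clearing the PTEs first and letting
	 * __tlb_remove_page() drop references (MMU_GATHER_NO_GATHER)
	 * before folio_remove_rmap_range() runs lets the refcount
	 * transiently fall below the mapcount; the extra
	 * folio_get()/folio_put() only keeps it from reaching zero.
	 */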

-- 
Cheers,

David / dhildenb
