Date:   Thu, 3 Aug 2023 15:50:25 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Ryan Roberts <ryan.roberts@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Yin Fengwei <fengwei.yin@...el.com>,
        Yu Zhao <yuzhao@...gle.com>, Yang Shi <shy828301@...il.com>,
        "Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
        Nathan Chancellor <nathan@...nel.org>,
        Alexander Gordeev <agordeev@...ux.ibm.com>,
        Gerald Schaefer <gerald.schaefer@...ux.ibm.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings

On 03.08.23 15:38, David Hildenbrand wrote:
> On 27.07.23 16:18, Ryan Roberts wrote:
>> This allows batching the rmap removal with folio_remove_rmap_range(),
>> which means we avoid spuriously adding a partially unmapped folio to the
>> deferred split queue in the common case, which reduces split queue lock
>> contention.
>>
>> Previously each page was removed from the rmap individually with
>> page_remove_rmap(). If the first page belonged to a large folio, this
>> would cause page_remove_rmap() to conclude that the folio was now
>> partially mapped and add the folio to the deferred split queue. But
>> subsequent calls would cause the folio to become fully unmapped, meaning
>> there is no value to adding it to the split queue.
>>
>> A complicating factor is that for platforms where MMU_GATHER_NO_GATHER
>> is enabled (e.g. s390), __tlb_remove_page() drops a reference to the
>> page. This means that the folio reference count could drop to zero while
>> still in use (i.e. before folio_remove_rmap_range() is called). This
>> does not happen on other platforms because the actual page freeing is
>> deferred.
>>
>> Solve this by appropriately getting/putting the folio to guarantee it
>> does not get freed early. Given the need to get/put the folio in the
>> batch path, we stick to the non-batched path if the folio is not large.
>> While the batched path is functionally correct for a folio with 1 page,
>> it is unlikely to be as efficient as the existing non-batched path in
>> this case.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>> ---
>>    mm/memory.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>>    1 file changed, 132 insertions(+)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 01f39e8144ef..d35bd8d2b855 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1391,6 +1391,99 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
>>    	pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
>>    }
>>    
>> +static inline unsigned long page_cont_mapped_vaddr(struct page *page,
>> +				struct page *anchor, unsigned long anchor_vaddr)
>> +{
>> +	unsigned long offset;
>> +	unsigned long vaddr;
>> +
>> +	offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
>> +	vaddr = anchor_vaddr + offset;
>> +
>> +	if (anchor > page) {
>> +		if (vaddr > anchor_vaddr)
>> +			return 0;
>> +	} else {
>> +		if (vaddr < anchor_vaddr)
>> +			return ULONG_MAX;
>> +	}
>> +
>> +	return vaddr;
>> +}
>> +
>> +static int folio_nr_pages_cont_mapped(struct folio *folio,
>> +				      struct page *page, pte_t *pte,
>> +				      unsigned long addr, unsigned long end)
>> +{
>> +	pte_t ptent;
>> +	int floops;
>> +	int i;
>> +	unsigned long pfn;
>> +	struct page *folio_end;
>> +
>> +	if (!folio_test_large(folio))
>> +		return 1;
>> +
>> +	folio_end = &folio->page + folio_nr_pages(folio);
>> +	end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
>> +	floops = (end - addr) >> PAGE_SHIFT;
>> +	pfn = page_to_pfn(page);
>> +	pfn++;
>> +	pte++;
>> +
>> +	for (i = 1; i < floops; i++) {
>> +		ptent = ptep_get(pte);
>> +
>> +		if (!pte_present(ptent) || pte_pfn(ptent) != pfn)
>> +			break;
>> +
>> +		pfn++;
>> +		pte++;
>> +	}
>> +
>> +	return i;
>> +}
>> +
>> +static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
>> +					    struct vm_area_struct *vma,
>> +					    struct folio *folio,
>> +					    struct page *page, pte_t *pte,
>> +					    unsigned long addr, int nr_pages,
>> +					    struct zap_details *details)
>> +{
>> +	struct mm_struct *mm = tlb->mm;
>> +	pte_t ptent;
>> +	bool full;
>> +	int i;
>> +
>> +	/* __tlb_remove_page may drop a ref; prevent going to 0 while in use. */
>> +	folio_get(folio);
> 
> Is there no way around that? It feels wrong and IMHO a bit ugly.
> 
> With this patch, you might suddenly have mapcount > refcount for a 
> folio, or am I wrong?

Thinking about it, maybe we should really find a way to keep the current 
logic flow unmodified:

1) ptep_get_and_clear_full()
2) tlb_remove_tlb_entry()
3) page_remove_rmap()
4) __tlb_remove_page()

For example, one loop to handle 1) and 2); and another one to handle 4).

This will need a way for the first loop to query how many times we can 
call __tlb_remove_page() before we need a flush.

The simple answer would be "batch->max - batch->nr". tlb_next_batch() 
makes exceeding that a bit harder; maybe it's not really required.
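
To make that a bit more concrete, below is a rough, untested sketch of 
how the two loops could look. tlb_batch_space() is a hypothetical helper 
(it doesn't exist today) that would return how many pages the current 
mmu_gather batch can still take, e.g. derived from 
"batch->max - batch->nr":

static int zap_anon_pte_range_sketch(struct mmu_gather *tlb,
				     struct vm_area_struct *vma,
				     struct folio *folio, struct page *page,
				     pte_t *pte, unsigned long addr,
				     int nr_pages)
{
	struct mm_struct *mm = tlb->mm;
	int i;

	/* Only take as many pages as the current batch can still hold. */
	nr_pages = min(nr_pages, tlb_batch_space(tlb));

	/* Loop 1: clear the PTEs and record the TLB entries to invalidate. */
	for (i = 0; i < nr_pages; i++) {
		ptep_get_and_clear_full(mm, addr, pte + i, tlb->fullmm);
		tlb_remove_tlb_entry(tlb, pte + i, addr);
		/* dirty/accessed/soft-dirty propagation elided here */
		addr += PAGE_SIZE;
	}

	/* Batched rmap removal, as in the patch under discussion. */
	folio_remove_rmap_range(folio, page, nr_pages, vma);

	/* Loop 2: queue the pages for freeing; cannot need a flush here. */
	for (i = 0; i < nr_pages; i++)
		__tlb_remove_page(tlb, page + i, 0);

	return nr_pages;
}

Keeping the rmap removal before __tlb_remove_page() preserves the 
original ordering, so the extra folio_get()/folio_put() shouldn't be 
needed.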

-- 
Cheers,

David / dhildenb
