Message-ID: <f26a6ac2-48a9-4bae-89b9-a3f9b97ae9dc@redhat.com>
Date: Tue, 4 Jun 2024 14:43:50 +0200
From: David Hildenbrand <david@...hat.com>
To: Usama Arif <usamaarif642@...il.com>, akpm@...ux-foundation.org
Cc: hannes@...xchg.org, willy@...radead.org, yosryahmed@...gle.com,
 nphamcs@...il.com, chengming.zhou@...ux.dev, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH v2 1/2] mm: clear pte for folios that are zero filled

On 04.06.24 14:30, David Hildenbrand wrote:
> On 04.06.24 12:58, Usama Arif wrote:
>> Approximately 10-20% of pages to be swapped out are zero pages [1].
>> Rather than reading/writing these pages to flash resulting
>> in increased I/O and flash wear, the pte can be cleared for those
>> addresses at unmap time while shrinking folio list. When this
>> causes a page fault, do_pte_missing will take care of this page.
>> With this patch, NVMe writes in Meta server fleet decreased
>> by almost 10% with conventional swap setup (zswap disabled).
>>
>> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>>
>> Signed-off-by: Usama Arif <usamaarif642@...il.com>
>> ---
>>    include/linux/rmap.h |   1 +
>>    mm/rmap.c            | 163 ++++++++++++++++++++++---------------------
>>    mm/vmscan.c          |  89 ++++++++++++++++-------
>>    3 files changed, 150 insertions(+), 103 deletions(-)
>>
>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>> index bb53e5920b88..b36db1e886e4 100644
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -100,6 +100,7 @@ enum ttu_flags {
>>    					 * do a final flush if necessary */
>>    	TTU_RMAP_LOCKED		= 0x80,	/* do not grab rmap lock:
>>    					 * caller holds it */
>> +	TTU_ZERO_FOLIO		= 0x100,/* zero folio */
>>    };
>>    
>>    #ifdef CONFIG_MMU
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 52357d79917c..d98f70876327 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1819,96 +1819,101 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>    			 */
>>    			dec_mm_counter(mm, mm_counter(folio));
>>    		} else if (folio_test_anon(folio)) {
>> -			swp_entry_t entry = page_swap_entry(subpage);
>> -			pte_t swp_pte;
>> -			/*
>> -			 * Store the swap location in the pte.
>> -			 * See handle_pte_fault() ...
>> -			 */
>> -			if (unlikely(folio_test_swapbacked(folio) !=
>> -					folio_test_swapcache(folio))) {
>> +			if (flags & TTU_ZERO_FOLIO) {
>> +				pte_clear(mm, address, pvmw.pte);
>> +				dec_mm_counter(mm, MM_ANONPAGES);
> 
> Is there an easy way to reduce the code churn and highlight the added code?
> 
> Like
> 
> } else if (folio_test_anon(folio) && (flags & TTU_ZERO_FOLIO)) {
> 
> } else if (folio_test_anon(folio)) {
> 
> 
> 
> Also, two concerns that I want to spell out:
> 
> (a) what stops the page from getting modified in the meantime? The CPU
>       can write it until the TLB was flushed.
> 
> (b) do you properly handle if the page is pinned (or just got pinned)
>       and we must not discard it?
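
Concretely, I'd expect the ordering to look roughly like this (kernel-style pseudocode only, not a tested patch; the fallback label is assumed, and folio_maybe_dma_pinned() alone does not close the race against concurrent GUP-fast):

```c
/* Sketch: stop further writes first, then decide whether discarding is safe. */
pteval = ptep_clear_flush(vma, address, pvmw.pte);    /* (a) no more CPU writes */
if (folio_maybe_dma_pinned(folio) ||                  /* (b) page may be pinned */
    memchr_inv(folio_address(folio), 0, PAGE_SIZE)) { /* re-check it is zero    */
	set_pte_at(mm, address, pvmw.pte, pteval);    /* restore the mapping    */
	goto fallback_to_swap;                        /* take the normal path   */
}
dec_mm_counter(mm, MM_ANONPAGES);
```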

Oh, and I forgot: are you handling userfaultfd as expected? IIRC there 
are some really nasty side effects with userfaultfd, even when 
userfaultfd is currently not registered for a VMA [1].

[1] 
https://lore.kernel.org/linux-mm/3a4b1027-df6e-31b8-b0de-ff202828228d@redhat.com/

What should work is replacing all-zero anonymous pages by the shared 
zeropage iff the anonymous page is not pinned and we synchronize against 
GUP fast. Well, and we handle possible concurrent writes accordingly.

KSM does essentially that when told to de-duplicate the shared zeropage, 
and I was wondering a while ago whether we would want a zeropage-only KSM 
version that doesn't need stable trees and all that, but only 
deduplicates zero-filled pages into the shared zeropage in a safe way.

-- 
Cheers,

David / dhildenb

