Message-ID: <533a7c3d-3a48-b16b-b421-6e8386e0b142@redhat.com>
Date:   Mon, 3 Apr 2023 17:20:22 +0200
From:   David Hildenbrand <david@...hat.com>
To:     xu xin <xu.xin.sc@...il.com>
Cc:     akpm@...ux-foundation.org, imbrenda@...ux.ibm.com,
        jiang.xuexin@....com.cn, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, ran.xiaokai@....com.cn, xu.xin16@....com.cn,
        yang.yang29@....com.cn
Subject: Re: [PATCH v6 0/6] ksm: support tracking KSM-placed zero-pages

On 30.03.23 14:06, xu xin wrote:
> Hi, I'm sorry to reply so late; I've been busy with work matters recently.
> 
> I appreciate David's idea of simplifying the implementation of tracking KSM-placed
> zero pages. But I'm confused about how to implement that via pte_mkdirty/pte_dirty
> without affecting other functions, now and in the future.

No need to worry too much about the future here :)

> 
>>
>> I already shared some feedback in [1]. I think we should try to simplify
>> this handling, as proposed in that mail. Still waiting for a reply.
>>
>> [1]
>> https://lore.kernel.org/all/9d7a8be3-ee9e-3492-841b-a0af9952ef36@redhat.com/
> 
> I have some questions about using pte_mkdirty to mark KSM-placed zero pages.
> 
> (1) Will KSM's use of pte_mkdirty to mark KSM-placed zero pages collide with the
>      existing handling of the same pte in other features? And what if, in the future,
>      new code also uses pte_mkdirty for other goals?

So far I am not aware of other users of the dirty bit for the shared zeropage. If ever
required (why?) we could try finding another PTE bit. Or use a completely separate set
of zeropages, if ever really running out of PTE bits.

I selected pte_dirty() because it's available on all architectures and should be unused
on the shared zeropage (always clean).

Until then, we only have to worry about architectures that treat R/O dirty PTEs as
writable (I only know of sparc64); maybe that's a good justification to finally fix
sparc64 and identify others. Again, happy to help here. [1]
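
To make that concrete, here is a minimal sketch (helper name hypothetical, not the
actual patch) of how KSM could set the dirty bit when placing the shared zeropage:

static inline pte_t mk_ksm_zero_pte(struct vm_area_struct *vma,
                                    unsigned long addr)
{
        pte_t newpte;

        /* Map the shared zeropage read-only, as use_zero_pages already does. */
        newpte = pte_mkspecial(pfn_pte(page_to_pfn(ZERO_PAGE(addr)),
                                       vma->vm_page_prot));
        /* Reuse the (otherwise always-clean) dirty bit as a "placed by KSM" marker. */
        newpte = pte_mkdirty(newpte);
        return newpte;
}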

> 
> (2) Can the literal meaning of pte_mkdirty represent a pte that points to a KSM zero page?

I briefly scanned the code. pte_dirty() should mostly not matter for the shared zeropage.
We have to double check (will do as well).

> 
> (3) Suppose we use the pte_mkdirty approach: how do we update/decrement the count of
>      ksm_zero_pages when a write from userspace triggers COW (Copy on Write)? In the
>      *mm_fault paths outside mm/ksm.c?

Yes. Do it synchronously when unmapping the shared zeropage.


diff --git a/mm/memory.c b/mm/memory.c
index f456f3b5049c..78b6c60602dd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1351,6 +1351,8 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
         pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
  }
  
+#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
+
  static unsigned long zap_pte_range(struct mmu_gather *tlb,
                                 struct vm_area_struct *vma, pmd_t *pmd,
                                 unsigned long addr, unsigned long end,
@@ -1392,8 +1394,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
                         tlb_remove_tlb_entry(tlb, pte, addr);
                         zap_install_uffd_wp_if_needed(vma, addr, pte, details,
                                                       ptent);
-                       if (unlikely(!page))
+                       if (unlikely(!page)) {
+                               if (is_ksm_zero_pte(ptent)) {
+                                       /* TODO: adjust counter */
+                               }
                                continue;
+                       }
  
                         delay_rmap = 0;
                         if (!PageAnon(page)) {
@@ -3111,6 +3116,9 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
                                 inc_mm_counter(mm, MM_ANONPAGES);
                         }
                 } else {
+                       if (is_ksm_zero_pte(vmf->orig_pte)) {
+                               /* TODO: adjust counter */
+                       }
                         inc_mm_counter(mm, MM_ANONPAGES);
                 }
                 flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));


The nice thing is, if we get it wrong we "only" get wrong counters.

A prototype for that should be fairly simple -- to see what we're missing.
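
If it helps, the two TODOs could be filled by a tiny helper along these lines
(counter and helper names are illustrative only, not part of any posted patch):

/* Hypothetical counter maintained by mm/ksm.c. */
extern atomic_long_t ksm_zero_pages;

/*
 * Drop the count whenever a KSM-placed shared zeropage is unmapped
 * or replaced through COW.
 */
static inline void ksm_might_unmap_zero_page(pte_t pte)
{
        if (is_ksm_zero_pte(pte))
                atomic_long_dec(&ksm_zero_pages);
}

With such a helper, the is_ksm_zero_pte() checks in the diff above could collapse
into a single unconditional call at each site.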

> 
> 
> I've moved the previous message here so I can reply to everything together.
>> The problem I see with this approach is that it fundamentally relies on
>> the rmap/stable-tree to detect whether a zeropage was placed or not.
>>
>> I was wondering why we even need an rmap item *at all* anymore. Why
>> can't we place the shared zeropage and call it a day (remove the rmap
>> item)? Once we've placed a shared zeropage, the next KSM scan should
>> just ignore it; it's already deduplicated.
> 
> The reason is as follows ...
> Initially, every page scanned by ksmd is assigned an rmap_item storing the page
> information and KSM information, which lets ksmd know the status of every scanned
> page and update all counts, especially when COW happens. But since the use_zero_pages
> feature was merged, the situation has changed: KSM zero pages are the only
> KSM-scanned pages that don't own an rmap_item, so ksmd doesn't even know that
> KSM-placed zero pages exist, and that causes the problem our patches aim to solve.
> 

Understood, so per-PTE information would similarly work.
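
For reference, this is roughly the per-page tracking structure being discussed,
abridged from mm/ksm.c around this time (NUMA details and flag bits trimmed):

struct ksm_rmap_item {
        struct ksm_rmap_item *rmap_list; /* per-mm chain of scanned pages */
        struct anon_vma *anon_vma;       /* when on the stable tree */
        struct mm_struct *mm;
        unsigned long address;           /* + low bits used as flags */
        unsigned int oldchecksum;        /* when on the unstable tree */
        union {
                struct rb_node node;     /* unstable tree node */
                struct {                 /* when listed from the stable tree */
                        struct ksm_stable_node *head;
                        struct hlist_node hlist;
                };
        };
};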


[1] https://lkml.kernel.org/r/20221212130213.136267-1-david@redhat.com

-- 
Thanks,

David / dhildenb
