Message-ID: <3e61fb01-5277-80a4-610e-0608475637f8@redhat.com>
Date: Wed, 9 Nov 2022 11:40:01 +0100
From: David Hildenbrand <david@...hat.com>
To: xu.xin.sc@...il.com, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
xu xin <xu.xin16@....com.cn>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>,
Xuexin Jiang <jiang.xuexin@....com.cn>,
Xiaokai Ran <ran.xiaokai@....com.cn>,
Yang Yang <yang.yang29@....com.cn>
Subject: Re: [PATCH v3 2/5] ksm: support unsharing zero pages placed by KSM
On 21.10.22 14:54, David Hildenbrand wrote:
> On 21.10.22 12:17, David Hildenbrand wrote:
>> On 11.10.22 04:22, xu.xin.sc@...il.com wrote:
>>> From: xu xin <xu.xin16@....com.cn>
>>>
>>> use_zero_pages may be very useful, not just because of cache colouring
>>> as described in the documentation, but also because it can accelerate
>>> merging empty pages when there are plenty of them (pages full of zeros),
>>> as the time spent on page-by-page comparisons
>>> (unstable_tree_search_insert) is saved.
>>>
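For context, a minimal sketch of the fast path referred to above,
paraphrasing the existing use_zero_pages handling in cmp_and_merge_page();
exact names and locking may differ from the tree you are on:

	/*
	 * use_zero_pages fast path (sketch): if a candidate page's checksum
	 * matches the empty-page checksum, map the shared zeropage directly
	 * instead of searching the unstable tree page by page.
	 */
	if (ksm_use_zero_pages && (checksum == zero_checksum)) {
		struct vm_area_struct *vma;

		mmap_read_lock(mm);
		vma = find_mergeable_vma(mm, rmap_item->address);
		if (vma)
			err = try_to_merge_one_page(vma, page,
					ZERO_PAGE(rmap_item->address));
		mmap_read_unlock(mm);
		/* Merged with the zeropage; no tree search needed. */
		if (!err)
			return;
	}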
>>> But with use_zero_pages enabled, madvise(addr, len, MADV_UNMERGEABLE)
>>> and other ways of triggering unsharing (like writing 2 to
>>> /sys/kernel/mm/ksm/run) will *not* unshare the shared zeropages placed
>>> by KSM (which arguably contradicts the MADV_UNMERGEABLE documentation,
>>> at least).
>>>
>>> To avoid blindly unsharing all shared zeropages in the applicable VMAs,
>>> the patch introduces a dedicated flag, ZERO_PAGE_FLAG, to mark the
>>> rmap_items of those shared zeropages, and guarantees that these
>>> rmap_items are not freed as long as the zeropages have not been written
>>> to, so that only the *KSM-placed* zeropages are unshared.
>>>
>>> The patch does not degrade the performance of use_zero_pages, as it
>>> does not change the way that feature merges empty pages.
>>>
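For readers following along: the flag bits live in the low bits of
rmap_item->address, next to UNSTABLE_FLAG and STABLE_FLAG, so marking and
testing would look roughly like this sketch (the real call sites are in
the rest of the patch, not quoted here):

	/* Mark: this rmap_item tracks a KSM-placed shared zeropage. */
	rmap_item->address |= ZERO_PAGE_FLAG;

	/* On unmerge, only break COW where KSM actually placed a zeropage. */
	if (rmap_item->address & ZERO_PAGE_FLAG)
		err = break_ksm(vma, rmap_item->address & PAGE_MASK, true);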
>>> Fixes: e86c59b1b12d ("mm/ksm: improve deduplication of zero pages with colouring")
>>> Reported-by: David Hildenbrand <david@...hat.com>
>>> Cc: Claudio Imbrenda <imbrenda@...ux.ibm.com>
>>> Cc: Xuexin Jiang <jiang.xuexin@....com.cn>
>>> Signed-off-by: xu xin <xu.xin16@....com.cn>
>>> Co-developed-by: Xiaokai Ran <ran.xiaokai@....com.cn>
>>> Signed-off-by: Xiaokai Ran <ran.xiaokai@....com.cn>
>>> Co-developed-by: Yang Yang <yang.yang29@....com.cn>
>>> Signed-off-by: Yang Yang <yang.yang29@....com.cn>
>>> Signed-off-by: xu xin <xu.xin16@....com.cn>
>>> ---
>>> mm/ksm.c | 136 ++++++++++++++++++++++++++++++++++++++++++-------------
>>> 1 file changed, 105 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/mm/ksm.c b/mm/ksm.c
>>> index 13c60f1071d8..e351d7b6d15e 100644
>>> --- a/mm/ksm.c
>>> +++ b/mm/ksm.c
>>> @@ -213,6 +213,7 @@ struct ksm_rmap_item {
>>> #define SEQNR_MASK 0x0ff /* low bits of unstable tree seqnr */
>>> #define UNSTABLE_FLAG 0x100 /* is a node of the unstable tree */
>>> #define STABLE_FLAG 0x200 /* is listed from the stable tree */
>>> +#define ZERO_PAGE_FLAG 0x400 /* is zero page placed by KSM */
>>>
>>> /* The stable and unstable tree heads */
>>> static struct rb_root one_stable_tree[1] = { RB_ROOT };
>>> @@ -381,14 +382,6 @@ static inline struct ksm_rmap_item *alloc_rmap_item(void)
>>> return rmap_item;
>>> }
>>>
>>> -static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
>>> -{
>>> - ksm_rmap_items--;
>>> - rmap_item->mm->ksm_rmap_items--;
>>> - rmap_item->mm = NULL; /* debug safety */
>>> - kmem_cache_free(rmap_item_cache, rmap_item);
>>> -}
>>> -
>>> static inline struct ksm_stable_node *alloc_stable_node(void)
>>> {
>>> /*
>>> @@ -420,7 +413,8 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
>>> }
>>>
>>> /*
>>> - * We use break_ksm to break COW on a ksm page: it's a stripped down
>>> + * We use break_ksm to break COW on a ksm page or a KSM-placed zero page
>>> + * (the latter only happens when use_zero_pages is enabled): it's a stripped down
>>> *
>>> * if (get_user_pages(addr, 1, FOLL_WRITE, &page, NULL) == 1)
>>> * put_page(page);
>>> @@ -434,7 +428,8 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
>>> * of the process that owns 'vma'. We also do not want to enforce
>>> * protection keys here anyway.
>>> */
>>> -static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
>>> +static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>>> + bool ksm_check_bypass)
>>> {
>>> struct page *page;
>>> vm_fault_t ret = 0;
>>> @@ -449,6 +444,16 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
>>> ret = handle_mm_fault(vma, addr,
>>> FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
>>> NULL);
>>> + else if (ksm_check_bypass && is_zero_pfn(page_to_pfn(page))) {
>>> + /*
>>> + * Although this is not a KSM page, it is a zero page
>>> + * placed by KSM's use_zero_pages, so we should unshare
>>> + * it when ksm_check_bypass is true.
>>> + */
>>> + ret = handle_mm_fault(vma, addr,
>>> + FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
>>> + NULL);
>>> + }
>>
>> Please don't duplicate that page fault triggering code.
>>
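One possible shape, purely as a sketch: fold the zeropage check into the
existing branch instead of repeating the handle_mm_fault() call:

	if (PageKsm(page) ||
	    (ksm_check_bypass && is_zero_pfn(page_to_pfn(page))))
		ret = handle_mm_fault(vma, addr,
				      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
				      NULL);
	else
		ret = VM_FAULT_WRITE;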
>> Also, please be aware that this collides with
>>
>> https://lkml.kernel.org/r/20221021101141.84170-1-david@redhat.com
>>
>> Adjustments should be comparatively easy.
>
> ... except that I'm still working on FAULT_FLAG_UNSHARE support for the
> shared zeropage. That will be posted soonish (within the next 2 weeks).
>
Posted: https://lkml.kernel.org/r/20221107161740.144456-1-david@redhat.com
With that, we can use FAULT_FLAG_UNSHARE to break COW on the shared
zeropage as well.
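A rough sketch of where break_ksm() could end up on top of that series,
assuming FAULT_FLAG_UNSHARE is taught to break COW on the shared zeropage
as well (illustrative only, not the final code):

static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
{
	struct page *page;
	vm_fault_t ret = 0;

	do {
		bool ksm_page;

		cond_resched();
		page = follow_page(vma, addr, FOLL_GET | FOLL_MIGRATION);
		if (IS_ERR_OR_NULL(page))
			break;
		/* Both KSM pages and KSM-placed zeropages need unsharing. */
		ksm_page = PageKsm(page) || is_zero_pfn(page_to_pfn(page));
		put_page(page);

		if (!ksm_page)
			return 0;
		/* One unshare fault covers both cases, no duplication. */
		ret = handle_mm_fault(vma, addr,
				      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE,
				      NULL);
	} while (!(ret & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV |
			  VM_FAULT_OOM)));
	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
}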
--
Thanks,
David / dhildenb