Message-ID: <b451b318-133b-45f6-87b3-3dc3fa1f75a8@linux.ibm.com>
Date: Wed, 27 Aug 2025 23:39:13 +0530
From: Donet Tom <donettom@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>,
Giorgi Tchankvetadze <giorgitchankvetadze1997@...il.com>
Cc: aboorvad@...ux.ibm.com, akpm@...ux-foundation.org,
chengming.zhou@...ux.dev, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, richard.weiyang@...il.com, ritesh.list@...il.com,
xu.xin16@....com.cn
Subject: Re: [PATCH 1/2] mm/ksm: Reset KSM counters in mm_struct during fork
On 8/26/25 7:39 PM, David Hildenbrand wrote:
> On 26.08.25 15:47, Giorgi Tchankvetadze wrote:
>> What if we only allocate KSM stats when a process actually uses KSM?
>>
>> struct ksm_mm_stats {
>>         atomic_long_t merging_pages;
>>         atomic_long_t rmap_items;
>>         atomic_long_t zero_pages;
>> };
>>
>> struct ksm_mm_stats *ksm_stats; /* in mm_struct; NULL until first use */
>>
>> static inline struct ksm_mm_stats *mm_get_ksm_stats(struct mm_struct *mm)
>> {
>>         if (likely(mm->ksm_stats))
>>                 return mm->ksm_stats;
>>         return ksm_alloc_stats_if_needed(mm); /* slow path */
>> }
>
> The fork'ed child uses KSM. It just doesn't have any stable rmap entries.
>
> We have to copy the zero_pages counter such that
> ksm_might_unmap_zero_page() will do the right thing.
>
> But your comment made me realize that there is likely another bug:
>
> When copying zero_pages during fork(), we have to increment
> &ksm_zero_pages as well. Otherwise we will get an underflow later.
Yes, David, you are right. I added a test to check this scenario, and I
am seeing ksm_zero_pages go negative.
# cat /sys/kernel/mm/ksm/ksm_zero_pages
-128
#
>
> @Donet, can you look into that as well?
Sure, I will add a fix for this issue in the next version.