Message-ID: <512764a4-611c-42d4-8b4a-2aaca4e519a4@gmail.com>
Date: Tue, 26 Aug 2025 17:47:47 +0400
From: Giorgi Tchankvetadze <giorgitchankvetadze1997@...il.com>
To: donettom@...ux.ibm.com
Cc: aboorvad@...ux.ibm.com, akpm@...ux-foundation.org,
chengming.zhou@...ux.dev, david@...hat.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, richard.weiyang@...il.com, ritesh.list@...il.com,
xu.xin16@....com.cn
Subject: Re: [PATCH 1/2] mm/ksm: Reset KSM counters in mm_struct during fork
What if we only allocate KSM stats when a process actually uses KSM? Then fork() wouldn't need to reset anything in mm_struct; the child would simply start with a NULL pointer. Something like:
struct ksm_mm_stats {
	atomic_long_t merging_pages;
	atomic_long_t rmap_items;
	atomic_long_t zero_pages;
};

/* New member in struct mm_struct, NULL until the process first enters KSM: */
struct ksm_mm_stats *ksm_stats;

static inline struct ksm_mm_stats *mm_get_ksm_stats(struct mm_struct *mm)
{
	if (likely(mm->ksm_stats))
		return mm->ksm_stats;

	return ksm_alloc_stats_if_needed(mm);	/* Slow path */
}
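
ksm_alloc_stats_if_needed() above is just a name I made up for the slow path. An untested sketch of what it could look like, assuming kzalloc() plus cmpxchg() so that two threads racing to allocate don't leak or clobber each other's block:

/* Untested sketch: allocate on first use, publish atomically. */
static struct ksm_mm_stats *ksm_alloc_stats_if_needed(struct mm_struct *mm)
{
	struct ksm_mm_stats *stats = kzalloc(sizeof(*stats), GFP_KERNEL);

	if (!stats)
		return NULL;

	/* If someone else installed theirs first, free ours and use theirs. */
	if (cmpxchg(&mm->ksm_stats, NULL, stats) != NULL)
		kfree(stats);

	return mm->ksm_stats;
}

The block would of course also need to be freed somewhere on the exit path (e.g. from ksm_exit()).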
On 8/26/2025 4:49 PM, Donet Tom wrote:
> Currently, the KSM-related counters in `mm_struct` such as
> `ksm_merging_pages`, `ksm_rmap_items`, and `ksm_zero_pages` are
> inherited by the child process during fork. This results in
> incorrect accounting, since the child has not performed any
> KSM page merging.
>
> To fix this, reset these counters to 0 in the newly created
> `mm_struct` during fork. This ensures that KSM statistics
> remain accurate and only reflect the activity of each process.
>
> Signed-off-by: Donet Tom <donettom@...ux.ibm.com>
> ---
> include/linux/ksm.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 22e67ca7cba3..61b8892c632b 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -56,8 +56,12 @@ static inline long mm_ksm_zero_pages(struct mm_struct *mm)
>  static inline void ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>  {
>  	/* Adding mm to ksm is best effort on fork. */
> -	if (mm_flags_test(MMF_VM_MERGEABLE, oldmm))
> +	if (mm_flags_test(MMF_VM_MERGEABLE, oldmm)) {
> +		mm->ksm_merging_pages = 0;
> +		mm->ksm_rmap_items = 0;
> +		atomic_long_set(&mm->ksm_zero_pages, 0);
>  		__ksm_enter(mm);
> +	}
>  }
>  
>  static inline int ksm_execve(struct mm_struct *mm)
> --
> 2.51.0
>
>