Message-Id: <20220811160020.1e6823094217e8d6d3aaebdf@linux-foundation.org>
Date: Thu, 11 Aug 2022 16:00:20 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Kefeng Wang <wangkefeng.wang@...wei.com>
Cc: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Abhishek Shah <abhishek.shah@...umbia.edu>,
Gabriel Ryan <gabe@...columbia.edu>
Subject: Re: [PATCH] mm: ksm: fix data-race in __ksm_enter / run_store
On Tue, 2 Aug 2022 23:15:50 +0800 Kefeng Wang <wangkefeng.wang@...wei.com> wrote:
> Abhishek reported a data-race issue,
OK, but it would be better to perform an analysis of the alleged bug,
describe the potential effects if the race is hit, etc.
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2507,6 +2507,7 @@ int __ksm_enter(struct mm_struct *mm)
> {
> struct mm_slot *mm_slot;
> int needs_wakeup;
> + bool ksm_run_unmerge;
>
> mm_slot = alloc_mm_slot();
> if (!mm_slot)
> @@ -2515,6 +2516,10 @@ int __ksm_enter(struct mm_struct *mm)
> /* Check ksm_run too? Would need tighter locking */
> needs_wakeup = list_empty(&ksm_mm_head.mm_list);
>
> + mutex_lock(&ksm_thread_mutex);
> + ksm_run_unmerge = !!(ksm_run & KSM_RUN_UNMERGE);
> + mutex_unlock(&ksm_thread_mutex);
>
> spin_lock(&ksm_mmlist_lock);
> insert_to_mm_slots_hash(mm, mm_slot);
> /*
> @@ -2527,7 +2532,7 @@ int __ksm_enter(struct mm_struct *mm)
> * scanning cursor, otherwise KSM pages in newly forked mms will be
> * missed: then we might as well insert at the end of the list.
> */
> - if (ksm_run & KSM_RUN_UNMERGE)
> + if (ksm_run_unmerge)
run_store() can alter ksm_run right here, so __ksm_enter() is still
acting on the old setting?
> list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
> else
> list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);