Message-Id: <1623125298.bx63h3mopj.astroid@bobo.none>
Date: Tue, 08 Jun 2021 14:11:04 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Anton Blanchard <anton@...abs.org>, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linuxppc-dev@...ts.ozlabs.org, Andy Lutomirski <luto@...nel.org>,
	Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v4 1/4] lazy tlb: introduce lazy mm refcount helper functions
Excerpts from Andrew Morton's message of June 8, 2021 11:48 am:
> On Tue, 08 Jun 2021 11:39:56 +1000 Nicholas Piggin <npiggin@...il.com> wrote:
>
>> > Looks like a functional change. What's happening here?
>>
>> That's kthread_use_mm being clever about the lazy tlb mm. If the kthread
>> happens to have inherited the lazy tlb mm that we want to use here, then
>> we already have a refcount to it via the lazy tlb ref.
>>
>> So then it doesn't have to touch the refcount, but rather just converts
>> it from the lazy tlb ref to the returned reference. If the lazy tlb mm
>> doesn't get a reference, we can't do that.
>
> Please cover this in the changelog and perhaps a code comment.
>
Yeah, fair enough. I'll even throw in a bug fix (your nose was right, and
it was too clever by half...)
Thanks,
Nick
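
(For reference, here is a rough sketch of what the "clever" path in
kthread_use_mm() amounted to before this series, when the lazy tlb
reference was still a normal mm refcount. This is not the exact code,
only the refcount flow:

	if (active_mm != mm) {
		mmgrab(mm);		/* take our own reference */
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	...
	if (active_mm != mm)
		mmdrop(active_mm);	/* drop the old lazy reference */
	else
		smp_mb();		/* no refcount change: the existing
					 * lazy reference simply becomes the
					 * kthread's real reference */

i.e. when the kthread was already lazily running on mm, the existing lazy
reference was silently re-used as the kthread's real reference.)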
--
Fix a refcounting bug in kthread_use_mm (the mm reference is now increased
unconditionally, but the lazy tlb refcount is still dropped only if
mm != active_mm).
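Roughly, as a sketch (not the exact code, the real hunks are below):

	/* v4: the grab became unconditional ... */
	mmgrab(mm);
	...
	/*
	 * ... but the lazy tlb reference was still dropped only in the
	 * active_mm != mm case, so active_mm == mm leaked a reference.
	 */
	if (active_mm != mm)
		mmdrop_lazy_tlb(active_mm);
	else
		smp_mb();

	/* fixed: pair the unconditional mmgrab() with an unconditional drop */
	mmgrab(mm);
	...
	mmdrop_lazy_tlb(active_mm);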
And an update for the changelog:
If a kernel thread's current lazy tlb mm happens to be the one it wants to
use, then kthread_use_mm() cleverly transfers the mm refcount from the
lazy tlb mm reference to the returned reference. If the lazy tlb mm
reference is no longer identical to a normal reference, this trick does not
work, so the code is changed to be explicit about the two references.
Signed-off-by: Nicholas Piggin <npiggin@...il.com>
---
kernel/kthread.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index b70e28431a01..5e9797b2d06e 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1314,6 +1314,11 @@ void kthread_use_mm(struct mm_struct *mm)
WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
WARN_ON_ONCE(tsk->mm);
+ /*
+ * It's possible that tsk->active_mm == mm here, but we must
+ * still mmgrab(mm) and mmdrop_lazy_tlb(active_mm), because lazy
+ * mm may not have its own refcount (see mmgrab/drop_lazy_tlb()).
+ */
mmgrab(mm);
task_lock(tsk);
@@ -1338,12 +1343,9 @@ void kthread_use_mm(struct mm_struct *mm)
* memory barrier after storing to tsk->mm, before accessing
* user-space memory. A full memory barrier for membarrier
* {PRIVATE,GLOBAL}_EXPEDITED is implicitly provided by
- * mmdrop(), or explicitly with smp_mb().
+ * mmdrop_lazy_tlb().
*/
- if (active_mm != mm)
- mmdrop_lazy_tlb(active_mm);
- else
- smp_mb();
+ mmdrop_lazy_tlb(active_mm);
to_kthread(tsk)->oldfs = force_uaccess_begin();
}
--
2.23.0