Message-Id: <20211129130653.2037928-3-mark.rutland@arm.com>
Date: Mon, 29 Nov 2021 13:06:44 +0000
From: Mark Rutland <mark.rutland@....com>
To: linux-kernel@...r.kernel.org
Cc: benh@...nel.crashing.org, boqun.feng@...il.com, bp@...en8.de,
catalin.marinas@....com, dvyukov@...gle.com, efuller@...hat.com,
elver@...gle.com, ink@...assic.park.msu.ru, jonas@...thpole.se,
juri.lelli@...hat.com, linux@...linux.org.uk, luto@...nel.org,
mark.rutland@....com, mattst88@...il.com, michal.simek@...inx.com,
mingo@...hat.com, mpe@...erman.id.au, npiggin@...il.com,
paulmck@...nel.org, paulus@...ba.org, peterz@...radead.org,
rth@...ddle.net, shorne@...il.com,
stefan.kristiansson@...nalahti.fi, tglx@...utronix.de,
vincent.guittot@...aro.org, will@...nel.org
Subject: [PATCH v8 02/11] entry: snapshot thread flags

Some thread flags can be set remotely, and so even when IRQs are
disabled, the flags can change under our feet. Generally this is
unlikely to cause a problem in practice, but it is somewhat unsound,
and KCSAN will legitimately warn that there is a data race.

To avoid such issues, a snapshot of the flags has to be taken prior to
using them. Some places already use READ_ONCE() for that, others do
not. Convert them all to the new flag accessor helpers.
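
For reference, the accessor is a minimal wrapper around READ_ONCE();
a sketch of the helper as added by an earlier patch in this series
(the exact definition lives in <linux/thread_info.h>):

  /* Snapshot the current task's thread flags with one marked read. */
  static inline unsigned long read_thread_flags(void)
  {
  	return READ_ONCE(current_thread_info()->flags);
  }

The snapshot is taken once and then tested repeatedly, so a remote
update cannot change the value between checks, and KCSAN treats the
marked read as intentional rather than reporting a data race.
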
Signed-off-by: Mark Rutland <mark.rutland@....com>
Acked-by: Paul E. McKenney <paulmck@...nel.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
---
include/linux/entry-kvm.h | 2 +-
kernel/entry/common.c | 4 ++--
kernel/entry/kvm.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/entry-kvm.h b/include/linux/entry-kvm.h
index 0d7865a0731c..07c878d6e323 100644
--- a/include/linux/entry-kvm.h
+++ b/include/linux/entry-kvm.h
@@ -75,7 +75,7 @@ static inline void xfer_to_guest_mode_prepare(void)
  */
 static inline bool __xfer_to_guest_mode_work_pending(void)
 {
-	unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+	unsigned long ti_work = read_thread_flags();
 
 	return !!(ti_work & XFER_TO_GUEST_MODE_WORK);
 }
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index d5a61d565ad5..bad713684c2e 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -187,7 +187,7 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 		/* Check if any of the above work has queued a deferred wakeup */
 		tick_nohz_user_enter_prepare();
 
-		ti_work = READ_ONCE(current_thread_info()->flags);
+		ti_work = read_thread_flags();
 	}
 
 	/* Return the latest work state for arch_exit_to_user_mode() */
@@ -196,7 +196,7 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 
 static void exit_to_user_mode_prepare(struct pt_regs *regs)
 {
-	unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+	unsigned long ti_work = read_thread_flags();
 
 	lockdep_assert_irqs_disabled();
 
diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
index 49972ee99aff..96d476e06c77 100644
--- a/kernel/entry/kvm.c
+++ b/kernel/entry/kvm.c
@@ -26,7 +26,7 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
 		if (ret)
 			return ret;
 
-		ti_work = READ_ONCE(current_thread_info()->flags);
+		ti_work = read_thread_flags();
 	} while (ti_work & XFER_TO_GUEST_MODE_WORK || need_resched());
 	return 0;
 }
@@ -43,7 +43,7 @@ int xfer_to_guest_mode_handle_work(struct kvm_vcpu *vcpu)
 	 * disabled in the inner loop before going into guest mode. No need
 	 * to disable interrupts here.
 	 */
-	ti_work = READ_ONCE(current_thread_info()->flags);
+	ti_work = read_thread_flags();
 	if (!(ti_work & XFER_TO_GUEST_MODE_WORK))
 		return 0;
 
--
2.30.2