Message-Id: <20200527201119.1692513-6-bigeasy@linutronix.de>
Date: Wed, 27 May 2020 22:11:17 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: linux-kernel@...r.kernel.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Will Deacon <will@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Evgeniy Polyakov <zbr@...emap.net>, netdev@...r.kernel.org,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: [PATCH v3 5/7] connector/cn_proc: Protect send_msg() with a local lock
From: Mike Galbraith <umgwanakikbuti@...il.com>
send_msg() disables preemption to avoid out-of-order messages. The code
inside the preempt-disabled section acquires regular spinlocks, which are
converted to 'sleeping' spinlocks on a PREEMPT_RT kernel, and eventually
calls into a memory allocator. Both conflict with the RT semantics of a
preempt-disabled section.
Convert it to a local_lock, which allows RT kernels to substitute it with a
real per-CPU lock. On non-RT kernels this maps to preempt_disable() as
before. No functional change.
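For reference (not part of the patch itself), a minimal sketch of the general
conversion pattern using the local_lock API; the names my_data and
bump_counter are placeholders, not taken from this driver:

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct my_data {
	local_lock_t lock;
	unsigned int counter;
};

static DEFINE_PER_CPU(struct my_data, my_data) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void bump_counter(void)
{
	/*
	 * On !PREEMPT_RT this disables preemption, exactly like the old
	 * preempt_disable()/preempt_enable() pair. On PREEMPT_RT it takes
	 * a per-CPU spinlock instead, so code inside the section may
	 * acquire 'sleeping' spinlocks (e.g. in the allocator) without
	 * violating RT rules.
	 */
	local_lock(&my_data.lock);
	__this_cpu_inc(my_data.counter);
	local_unlock(&my_data.lock);
}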
[bigeasy: Patch description]
Cc: Evgeniy Polyakov <zbr@...emap.net>
Cc: netdev@...r.kernel.org
Signed-off-by: Mike Galbraith <umgwanakikbuti@...il.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
drivers/connector/cn_proc.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index d58ce664da843..646ad385e4904 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -18,6 +18,7 @@
 #include <linux/pid_namespace.h>
 
 #include <linux/cn_proc.h>
+#include <linux/local_lock.h>
 
 /*
  * Size of a cn_msg followed by a proc_event structure. Since the
@@ -38,25 +39,31 @@ static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
 static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
 static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
 
-/* proc_event_counts is used as the sequence number of the netlink message */
-static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
+/* local_event.count is used as the sequence number of the netlink message */
+struct local_event {
+	local_lock_t lock;
+	__u32 count;
+};
+static DEFINE_PER_CPU(struct local_event, local_event) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};
 
 static inline void send_msg(struct cn_msg *msg)
 {
-	preempt_disable();
+	local_lock(&local_event.lock);
 
-	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
+	msg->seq = __this_cpu_inc_return(local_event.count) - 1;
 	((struct proc_event *)msg->data)->cpu = smp_processor_id();
 
 	/*
-	 * Preemption remains disabled during send to ensure the messages are
-	 * ordered according to their sequence numbers.
+	 * local_lock() disables preemption during send to ensure the messages
+	 * are ordered according to their sequence numbers.
 	 *
 	 * If cn_netlink_send() fails, the data is not sent.
 	 */
 	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
 
-	preempt_enable();
+	local_unlock(&local_event.lock);
 }
 
 void proc_fork_connector(struct task_struct *task)
--
2.27.0.rc0