Message-ID: <20200525071819.GD329373@gmail.com>
Date: Mon, 25 May 2020 09:18:19 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Will Deacon <will@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Evgeniy Polyakov <zbr@...emap.net>, netdev@...r.kernel.org
Subject: Re: [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a
local lock

* Sebastian Andrzej Siewior <bigeasy@...utronix.de> wrote:
> From: Mike Galbraith <umgwanakikbuti@...il.com>
>
> send_msg() disables preemption to avoid out-of-order messages. The code
> inside the preempt-disabled section acquires regular spinlocks, which
> are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel, and
> eventually calls into a memory allocator. This conflicts with the RT
> semantics.
>
> Convert it to a local_lock, which allows RT kernels to substitute it
> with a real per-CPU lock. On non-RT kernels this maps to
> preempt_disable() as before. No functional change.
>
> [bigeasy: Patch description]
>
> Cc: Evgeniy Polyakov <zbr@...emap.net>
> Cc: netdev@...r.kernel.org
> Signed-off-by: Mike Galbraith <umgwanakikbuti@...il.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
> drivers/connector/cn_proc.c | 22 +++++++++++++++-------
> 1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
> index d58ce664da843..d424d1f469136 100644
> --- a/drivers/connector/cn_proc.c
> +++ b/drivers/connector/cn_proc.c
> @@ -18,6 +18,7 @@
> #include <linux/pid_namespace.h>
>
> #include <linux/cn_proc.h>
> +#include <linux/locallock.h>
>
> /*
> * Size of a cn_msg followed by a proc_event structure. Since the
> @@ -38,25 +39,32 @@ static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
> static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
> static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
>
> -/* proc_event_counts is used as the sequence number of the netlink message */
> -static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
> +/* local_evt.counts is used as the sequence number of the netlink message */
> +struct local_evt {
> + __u32 counts;
> + struct local_lock lock;
> +};
> +static DEFINE_PER_CPU(struct local_evt, local_evt) = {
> + .counts = 0,
I don't think zero initializations need to be written out explicitly.
> + .lock = INIT_LOCAL_LOCK(lock),
> +};
>
> static inline void send_msg(struct cn_msg *msg)
> {
> - preempt_disable();
> + local_lock(&local_evt.lock);
>
> - msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
> + msg->seq = __this_cpu_inc_return(local_evt.counts) - 1;
Naming nit: renaming this from 'proc_event_counts' to
'local_evt.counts' is a step back IMO - what's an 'evt',
did we run out of e's? ;-)
Should be something like local_event.count? (Singular.)
Thanks,
Ingo