Message-ID: <20201214102422.2d84035d@gandalf.local.home>
Date: Mon, 14 Dec 2020 10:24:22 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Ming Lei <ming.lei@...hat.com>
Cc: Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
Christoph Hellwig <hch@....de>, Ingo Molnar <mingo@...hat.com>,
linux-kernel@...r.kernel.org,
linux-rt-users <linux-rt-users@...r.kernel.org>
Subject: Re: [PATCH] blktrace: fix 'BUG: sleeping function called from
invalid context' in case of PREEMPT_RT
On Mon, 14 Dec 2020 10:22:17 +0800
Ming Lei <ming.lei@...hat.com> wrote:
> trace_note_tsk() is called by __blk_add_trace(), which runs under the RCU read lock.
> So in the case of PREEMPT_RT, the warning 'BUG: sleeping function called from invalid
> context' is triggered, because the spin lock is converted to an rtmutex.
The rcu_read_lock() cannot be the cause of this issue, because under
PREEMPT_RT an RCU read-side critical section can be preempted.
What was the full backtrace of this problem?
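For context, a minimal sketch of the two situations (illustrative only;
the lock and function names below are hypothetical, not from blktrace).
On PREEMPT_RT a spinlock_t is backed by an rtmutex and may sleep, which
is fine inside rcu_read_lock() since that leaves preemption enabled, but
not in a genuinely atomic context:

#include <linux/spinlock.h>
#include <linux/rcupdate.h>
#include <linux/preempt.h>

/* Hypothetical lock, for illustration only. */
static DEFINE_SPINLOCK(example_lock);

static void rt_safe_under_rcu(void)
{
	rcu_read_lock();
	/*
	 * Fine on PREEMPT_RT: rcu_read_lock() leaves preemption
	 * enabled there, so taking a sleeping rtmutex-backed
	 * spinlock_t inside the read-side section is allowed.
	 */
	spin_lock(&example_lock);
	spin_unlock(&example_lock);
	rcu_read_unlock();
}

static void rt_invalid_context(void)
{
	preempt_disable();
	/*
	 * Not fine on PREEMPT_RT: preemption is disabled, so the
	 * rtmutex hidden behind spin_lock() must not sleep here;
	 * this is the pattern that produces the
	 * "BUG: sleeping function called from invalid context" splat.
	 */
	spin_lock(&example_lock);
	spin_unlock(&example_lock);
	preempt_enable();
}

This is why the full backtrace matters: it shows what actually made the
context atomic, since the RCU read lock alone does not.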
>
> Fix the issue by converting running_trace_lock into raw_spin_lock().
>
> Cc: Christoph Hellwig <hch@....de>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: linux-kernel@...r.kernel.org
> Signed-off-by: Ming Lei <ming.lei@...hat.com>
> ---
> kernel/trace/blktrace.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
> index 2c5b3c5317c2..53dc876d669d 100644
> --- a/kernel/trace/blktrace.c
> +++ b/kernel/trace/blktrace.c
> @@ -34,7 +34,7 @@ static struct trace_array *blk_tr;
> static bool blk_tracer_enabled __read_mostly;
>
> static LIST_HEAD(running_trace_list);
> -static __cacheline_aligned_in_smp DEFINE_SPINLOCK(running_trace_lock);
> +static __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(running_trace_lock);
>
> /* Select an alternative, minimalistic output than the original one */
> #define TRACE_BLK_OPT_CLASSIC 0x1
> @@ -121,12 +121,12 @@ static void trace_note_tsk(struct task_struct *tsk)
> struct blk_trace *bt;
>
> tsk->btrace_seq = blktrace_seq;
> - spin_lock_irqsave(&running_trace_lock, flags);
> + raw_spin_lock_irqsave(&running_trace_lock, flags);
> list_for_each_entry(bt, &running_trace_list, running_list) {
> trace_note(bt, tsk->pid, BLK_TN_PROCESS, tsk->comm,
> sizeof(tsk->comm), 0);
> }
How big is this running_trace_list? It may not be something we want raw
locks around.
Please understand that converting locks to raw should be a last resort.
One should always look at the reason for a spin lock in a preempt-disabled
area and see if there is another means of solving it before simply switching
the lock to raw, as each raw spinlock makes PREEMPT_RT less real-time (see
the sketch after the quoted patch below).
-- Steve
> - spin_unlock_irqrestore(&running_trace_lock, flags);
> + raw_spin_unlock_irqrestore(&running_trace_lock, flags);
> }
>
> static void trace_note_time(struct blk_trace *bt)
> @@ -669,9 +669,9 @@ static int __blk_trace_startstop(struct request_queue *q, int start)
> blktrace_seq++;
> smp_mb();
> bt->trace_state = Blktrace_running;
> - spin_lock_irq(&running_trace_lock);
> + raw_spin_lock_irq(&running_trace_lock);
> list_add(&bt->running_list, &running_trace_list);
> - spin_unlock_irq(&running_trace_lock);
> + raw_spin_unlock_irq(&running_trace_lock);
>
> trace_note_time(bt);
> ret = 0;
> @@ -679,9 +679,9 @@ static int __blk_trace_startstop(struct request_queue *q, int start)
> } else {
> if (bt->trace_state == Blktrace_running) {
> bt->trace_state = Blktrace_stopped;
> - spin_lock_irq(&running_trace_lock);
> + raw_spin_lock_irq(&running_trace_lock);
> list_del_init(&bt->running_list);
> - spin_unlock_irq(&running_trace_lock);
> + raw_spin_unlock_irq(&running_trace_lock);
> relay_flush(bt->rchan);
> ret = 0;
> }
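To make the latency trade-off concrete, here is an illustrative sketch
(hypothetical lock names, not part of the patch) contrasting the two
lock flavors on PREEMPT_RT:

#include <linux/spinlock.h>

/*
 * spinlock_t: on PREEMPT_RT this becomes a sleeping rtmutex, so the
 * holder can be preempted by a higher-priority task and the critical
 * section does not add to worst-case scheduling latency. Note that
 * on RT the _irqsave variant does not actually disable hardirqs.
 */
static DEFINE_SPINLOCK(sleeping_on_rt_lock);

/*
 * raw_spinlock_t: a true busy-waiting spinlock even on PREEMPT_RT.
 * Preemption (and interrupts, with _irqsave) stay disabled for the
 * whole critical section, so every cycle spent under it -- e.g.
 * walking a potentially long list -- adds directly to the system's
 * worst-case latency. Hence raw conversion as a last resort.
 */
static DEFINE_RAW_SPINLOCK(always_spinning_lock);

static void compare_critical_sections(void)
{
	unsigned long flags;

	spin_lock_irqsave(&sleeping_on_rt_lock, flags);
	/* On RT: preemptible, may sleep, no latency penalty. */
	spin_unlock_irqrestore(&sleeping_on_rt_lock, flags);

	raw_spin_lock_irqsave(&always_spinning_lock, flags);
	/* On RT: non-preemptible; keep this section as short as possible. */
	raw_spin_unlock_irqrestore(&always_spinning_lock, flags);
}

Under the proposed patch, the length of running_trace_list bounds how
long the raw critical section can get, which is why its size is the
question above.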