Message-ID: <YcBVcOkWumdpjtvI@linutronix.de>
Date:   Mon, 20 Dec 2021 11:05:36 +0100
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     Wander Lairson Costa <wander@...hat.com>
Cc:     linux-kernel@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        "open list:BLOCK LAYER" <linux-block@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v3] blktrace: switch trace spinlock to a raw spinlock

On 2021-12-17 11:16:56 [-0300], Wander Lairson Costa wrote:

Assuming neither Steven nor Jens object,

> TRACE_EVENT disables preemption before calling the callback. Because of
> that blktrace triggers the following bug under PREEMPT_RT:

The tracepoint is invoked with disabled preemption.
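
For context, the pattern that trips here is roughly the following
(hypothetical names, a sketch rather than the blktrace code itself):
the tracepoint machinery disables preemption around the callback, and
on PREEMPT_RT a spinlock_t is an rtmutex-based sleeping lock, so taking
it in that section produces the splat quoted below.

#include <linux/preempt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);		/* hypothetical lock */

static void example_trace_callback(void)	/* hypothetical callback */
{
	preempt_disable();		/* done by the tracepoint code */
	spin_lock(&example_lock);	/* sleeping lock on RT -> splat */
	/* ... touch the shared list ... */
	spin_unlock(&example_lock);
	preempt_enable();
}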

>  BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:35
>  in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 119, name: kworker/u2:2
>  5 locks held by kworker/u2:2/119:
>   #0: ffff8c2e4a88f538 ((wq_completion)xfs-cil/dm-0){+.+.}-{0:0}, at: process_one_work+0x200/0x450
>   #1: ffffab3840ac7e68 ((work_completion)(&cil->xc_push_work)){+.+.}-{0:0}, at: process_one_work+0x200/0x450
>   #2: ffff8c2e4a887128 (&cil->xc_ctx_lock){++++}-{3:3}, at: xlog_cil_push_work+0xb7/0x670 [xfs]
>   #3: ffffffffa6a63780 (rcu_read_lock){....}-{1:2}, at: blk_add_trace_bio+0x0/0x1f0
>   #4: ffffffffa6610620 (running_trace_lock){+.+.}-{2:2}, at: __blk_add_trace+0x3ef/0x480
>  Preemption disabled at:
>  [<ffffffffa4d35c05>] migrate_enable+0x45/0x140
>  CPU: 0 PID: 119 Comm: kworker/u2:2 Kdump: loaded Not tainted 5.14.0-25.rt21.25.light.el9.x86_64+debug #1
>  Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
>  Workqueue: xfs-cil/dm-0 xlog_cil_push_work [xfs]
>  Call Trace:
>   ? migrate_enable+0x45/0x140
>   dump_stack_lvl+0x57/0x7d
>   ___might_sleep.cold+0xe3/0xf7
>   rt_spin_lock+0x3a/0xd0
>   ? __blk_add_trace+0x3ef/0x480
>   __blk_add_trace+0x3ef/0x480
>   blk_add_trace_bio+0x18d/0x1f0
>   trace_block_bio_queue+0xb5/0x150
>   submit_bio_checks+0x1f0/0x520
>   ? sched_clock_cpu+0xb/0x100
>   submit_bio_noacct+0x30/0x1d0
>   ? bio_associate_blkg+0x66/0x190
>   xlog_cil_push_work+0x1b6/0x670 [xfs]
>   ? register_lock_class+0x43/0x4f0
>   ? xfs_swap_extents+0x5f0/0x5f0 [xfs]
>   process_one_work+0x275/0x450
>   ? process_one_work+0x200/0x450
>   worker_thread+0x55/0x3c0
>   ? process_one_work+0x450/0x450
>   kthread+0x188/0x1a0
>   ? set_kthread_struct+0x40/0x40
>   ret_from_fork+0x22/0x30

The above fills 90% of my screen with _no_ additional information. What
about:

   The running_trace_lock protects running_trace_list and is acquired
   within the tracepoint which implies disabled preemption. The spinlock_t
   typed lock cannot be acquired with disabled preemption on PREEMPT_RT
   because it becomes a sleeping lock.
   The runtime of the tracepoint depends on the number of entries in
   running_trace_list and has no limit. The blk-tracer is considered debug
   code and higher latencies here are okay.

   Make running_trace_lock a raw_spinlock_t.
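
In code terms the change boils down to something like the following
(a sketch, not the actual patch hunks; the struct and helper are
hypothetical stand-ins for the list walk done from the tracepoint path):

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_trace {			/* stand-in for the real trace struct */
	struct list_head running_list;
};

static DEFINE_RAW_SPINLOCK(running_trace_lock);	/* was DEFINE_SPINLOCK() */
static LIST_HEAD(running_trace_list);

static void note_running_traces(void)	/* hypothetical stand-in */
{
	struct example_trace *bt;
	unsigned long flags;

	/*
	 * Preemption and interrupts stay disabled for the whole walk, so
	 * the latency grows with the number of running traces -- hence the
	 * point that blktrace is debug code where this is acceptable.
	 */
	raw_spin_lock_irqsave(&running_trace_lock, flags);
	list_for_each_entry(bt, &running_trace_list, running_list) {
		/* ... emit a note for each running trace ... */
	}
	raw_spin_unlock_irqrestore(&running_trace_lock, flags);
}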

> To avoid this bug, we switch the trace lock to a raw spinlock.

Basically I want to give the rationale for _why_ changing a lock to
raw_spinlock_t is okay _here_. I want to avoid people slapping a
s/spinlock_t/raw_spinlock_t/ on every warning of this kind.

> Signed-off-by: Wander Lairson Costa <wander@...hat.com>

Sebastian
