Message-ID: <ZCXbR+Pnff6jrstu@boqun-archlinux>
Date: Thu, 30 Mar 2023 11:56:07 -0700
From: Boqun Feng <boqun.feng@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Wedson Almeida Filho <wedsonaf@...il.com>,
rust-for-linux@...r.kernel.org, Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>,
Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
linux-kernel@...r.kernel.org,
Wedson Almeida Filho <walmeida@...rosoft.com>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH 03/13] rust: lock: introduce `Mutex`
On Thu, Mar 30, 2023 at 11:47:12AM -0700, Boqun Feng wrote:
> On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:
> > On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:
> > > From: Wedson Almeida Filho <walmeida@...rosoft.com>
> > >
> > > This is the `struct mutex` lock backend and allows Rust code to use the
> > > kernel mutex idiomatically.
> >
> > What, if anything, are the plans to support the various lockdep
> > annotations? Idem for the spinlock thing in the other patch I suppose.
>
> FWIW:
>
> * At the init stage, SpinLock and Mutex in Rust use initializers
>   that are aware of lockdep, so everything (lockdep_map and
>   lock_class) is set up properly.
>
> * At acquire or release time, Rust locks simply use FFI to call C
>   functions that have lockdep annotations in them, so lockdep
>   should just work.
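(To make the split above concrete, here is a rough userspace toy, not the kernel code; every name in it is invented for illustration. The point is only the shape: the lock's lockdep identity is fixed at init time, and acquire just delegates to a callee that carries the annotation, the way the Rust backends call the annotated C functions through FFI.)

```rust
use std::cell::UnsafeCell;

// Invented stand-in for `struct lockdep_map`: the identity lockdep
// tracks, fixed once when the lock is initialized.
pub struct DepMap {
    pub class_name: &'static str,
}

// Invented stand-in for the C side reached via FFI: the lockdep
// annotation lives in the callee, not in the Rust wrapper.
pub fn c_spin_lock(dep: &DepMap) -> String {
    format!("lock_acquire(class={})", dep.class_name)
}

pub struct SpinLock<T> {
    dep: DepMap,
    #[allow(dead_code)]
    data: UnsafeCell<T>,
}

impl<T> SpinLock<T> {
    // Init stage: everything lockdep needs is wired up here, so call
    // sites need no per-use annotations.
    pub fn new(class_name: &'static str, data: T) -> Self {
        SpinLock {
            dep: DepMap { class_name },
            data: UnsafeCell::new(data),
        }
    }

    // Acquire: simply delegate; the annotation happens in the callee.
    pub fn lock(&self) -> String {
        c_spin_lock(&self.dep)
    }
}

fn main() {
    let l = SpinLock::new("A1", 0u32);
    assert_eq!(l.lock(), "lock_acquire(class=A1)");
    println!("ok");
}
```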
>
> In fact, I shared the same worry as you, so I have already been
> working on adding lockdep selftests for the Rust lock APIs; I will
> send them shortly, although they are still a draft.
>
Needless to say, the tests show that lockdep works for deadlock
detection (although they currently cover only simple cases):
[...] locking selftest: Selftests for Rust locking APIs
[...] rust_locking_selftest::SpinLockAATest:
[...]
[...] ============================================
[...] WARNING: possible recursive locking detected
[...] 6.3.0-rc1-00049-gee35790bd43e-dirty #99 Not tainted
[...] --------------------------------------------
[...] swapper/0/0 is trying to acquire lock:
[...] ffffffff8c603e30 (A1){+.+.}-{2:2}, at: _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
[...]
[...] but task is already holding lock:
[...] ffffffff8c603de0 (A1){+.+.}-{2:2}, at: _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
[...]
[...] other info that might help us debug this:
[...] Possible unsafe locking scenario:
[...]
[...] CPU0
[...] ----
[...] lock(A1);
[...] lock(A1);
[...]
[...] *** DEADLOCK ***
[...]
[...] May be due to missing lock nesting notation
[...]
[...] 1 lock held by swapper/0/0:
[...] #0: ffffffff8c603de0 (A1){+.+.}-{2:2}, at: _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
[...]
[...] stack backtrace:
[...] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.3.0-rc1-00049-gee35790bd43e-dirty #99
[...] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.1-1-1 04/01/2014
[...] Call Trace:
[...] <TASK>
[...] dump_stack_lvl+0x6d/0xa0
[...] __lock_acquire+0x825/0x2e20
[...] ? __lock_acquire+0x626/0x2e20
[...] ? prb_read_valid+0x24/0x50
[...] ? printk_get_next_message+0xf6/0x380
[...] ? _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
[...] lock_acquire+0x109/0x2c0
[...] ? _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
[...] _raw_spin_lock+0x2e/0x40
[...] ? _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
[...] _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
[...] _RNvXCsaDWbe1gW6fC_21rust_locking_selftestNtB2_14SpinLockAATestNtB2_8LockTest4test+0xa5/0xe0
[...] ? prb_read_valid+0x24/0x50
[...] dotest+0x5a/0x8d0
[...] rust_locking_test+0xf8/0x210
[...] ? _printk+0x58/0x80
[...] ? local_lock_release+0x60/0x60
[...] locking_selftest+0x328f/0x32b0
[...] start_kernel+0x285/0x420
[...] secondary_startup_64_no_verify+0xe1/0xeb
[...] </TASK>
[...] ok | lockclass mask: 100, debug_locks: 0, expected: 0
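As a side note, the AA case this splat exercises is simple to model outside the kernel. The sketch below is a toy re-implementation of just that one check (every name is invented, and real lockdep tracks far more state and class dependencies): lockdep reasons about lock classes rather than instances, so re-acquiring an already-held class is flagged as possible recursive locking.

```rust
use std::cell::RefCell;

// Invented stand-in for a lockdep class key: lockdep reasons about
// lock *classes*, not individual lock instances.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct LockClassKey(pub &'static str);

thread_local! {
    // Classes currently held by this "task", in acquisition order.
    static HELD: RefCell<Vec<LockClassKey>> = RefCell::new(Vec::new());
}

// Toy version of the acquire-time check: taking a class that is
// already held is reported as possible recursive (AA) locking.
pub fn lock_acquire(class: LockClassKey) -> Result<(), &'static str> {
    HELD.with(|held| {
        let mut held = held.borrow_mut();
        if held.contains(&class) {
            return Err("possible recursive locking detected");
        }
        held.push(class);
        Ok(())
    })
}

pub fn lock_release(class: LockClassKey) {
    HELD.with(|held| {
        held.borrow_mut().retain(|&c| c != class);
    });
}

fn main() {
    let a1 = LockClassKey("A1");
    assert!(lock_acquire(a1).is_ok());
    // Second acquisition of the same class while still held: the AA
    // deadlock the SpinLockAATest selftest triggers on purpose.
    assert_eq!(lock_acquire(a1), Err("possible recursive locking detected"));
    lock_release(a1);
    println!("ok");
}
```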
Regards,
Boqun
> Regards,
> Boqun