Message-ID: <4f051c35-fb45-4f91-8daf-6ba53c8d4ab9@gmail.com>
Date: Thu, 3 Oct 2024 18:18:35 +0200
From: Dirk Behme <dirk.behme@...il.com>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Dirk Behme <dirk.behme@...bosch.com>,
Andreas Hindborg <a.hindborg@...nel.org>, Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>,
Anna-Maria Behnsen <anna-maria@...utronix.de>,
Frederic Weisbecker <frederic@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>, Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
Benno Lossin <benno.lossin@...ton.me>, Alice Ryhl <aliceryhl@...gle.com>,
rust-for-linux@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 00/14] hrtimer Rust API
On 03.10.24 at 15:03, Boqun Feng wrote:
> On Thu, Oct 03, 2024 at 10:14:17AM +0200, Dirk Behme wrote:
>> On 01.10.24 at 16:42, Boqun Feng wrote:
>>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>>>> Hi!
>>>>>
>>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>>>
>>>>> I tried breaking the code up into smaller patches; hopefully that will
>>>>> ease the review process a bit.
>>>>
>>>> Just FYI: with all 14 patches applied, I get [1] on the first (doctest)
>>>> example from hrtimer.rs.
>>>>
>>>> This is from lockdep:
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>>>
>>>> Having had just a quick look, I'm not sure what the root cause is. Maybe a
>>>> mutex in interrupt context? Or something more subtle?
>>>
>>> I think it's a mutex being taken inside interrupt context, as shown by
>>> the call stack:
>>>
>>> ] __mutex_lock+0xa0/0xa4
>>> ] ...
>>> ] hrtimer_interrupt+0x1d4/0x2ac
>>>
>>> , it is because:
>>>
>>> +//! struct ArcIntrusiveTimer {
>>> +//!     #[pin]
>>> +//!     timer: Timer<Self>,
>>> +//!     #[pin]
>>> +//!     flag: Mutex<bool>,
>>> +//!     #[pin]
>>> +//!     cond: CondVar,
>>> +//! }
>>>
>>> has a Mutex<bool>, which actually should be a SpinLockIrq [1].
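Just to check my understanding of the proposed fix: the struct would
then look roughly like this? (My sketch, assuming the SpinLockIrq
interface from [1]; untested.)

//! struct ArcIntrusiveTimer {
//!     #[pin]
//!     timer: Timer<Self>,
//!     #[pin]
//!     flag: SpinLockIrq<bool>,  // irq-safe lock instead of Mutex<bool>
//!     #[pin]
//!     cond: CondVar,
//! }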
>>
>>
>> Two understanding questions:
>>
>
> Good questions. ;-)
:-)
>> 1. In the main thread (full example for reference below [2]) where is the
>> lock released? After the while loop? I.e. is the lock held until guard
>
> With the current implementation, there are two places the lock will be
> released: 1) inside CondVar::wait() and
CondVar::wait() releases *and* reacquires the lock, then? So outside
of CondVar::wait(), but still inside the while() loop, the lock is
held until the loop is exited? See my annotated sketch below.

Would that lock handling inside CondVar::wait() then also handle the
irq state (irq disable and enable) of SpinLockIrq correctly?
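To check that I read it right, this is how I would annotate the wait
loop (my understanding only, not verified against the CondVar code):

    let mut guard = has_timer.flag.lock();   // lock acquired here
    while *guard != 5 {
        // wait() releases the lock while sleeping and reacquires it
        // before returning, so *guard is always re-read under the lock
        has_timer.cond.wait(&mut guard);
    }
    // lock still held here; released once `guard` goes out of scope
    drop(guard);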
> 2) after `guard` is eventually
> dropped after the loop.
>
>> reaches 5?
>>
>> let mut guard = has_timer.flag.lock(); // <= lock taken here?
>>
>> while *guard != 5 {
>>     has_timer.cond.wait(&mut guard);
>> } // <= lock released here?
>>
>> I wonder what this would mean for the interrupt TimerCallback in case we
>> used an irq-off SpinLock here instead?
>>
>> Or maybe:
>>
>> 2. The only place where the guard value is modified (*guard += 1;) is in
>> the TimerCallback, which, as we learned, runs in interrupt context. So
>> writing the guard value can't be interrupted. Couldn't we then drop the
>> whole lock?
>>
>
> No, because the main thread can run on another CPU, so disabling
> interrupts (because of the interrupt handlers) doesn't mean exclusive
> access to the value.
Yes, I agree, if the main thread would also write. But the main thread
does read-only accesses, only? So it reads either the old or the new
value, independent of the locking? Only the interrupt handler does
read/modify/write, and that is already protected by the interrupt
context.
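To make the question concrete: if a plain shared integer is ruled out
(Rust would treat the unsynchronized read/write pair as a data race),
would an atomic counter be the natural lock-free alternative? A
hypothetical sketch, only to illustrate the question; note that
CondVar::wait() needs a lock guard, so the waiting side would have to
change as well:

//! use core::sync::atomic::{AtomicU64, Ordering};
//!
//! struct ArcIntrusiveTimer {
//!     #[pin]
//!     timer: Timer<Self>,
//!     flag: AtomicU64, // shared counter, no lock around it
//!     #[pin]
//!     cond: CondVar,
//! }
//!
//! // In run(), interrupt context:
//! let count = this.flag.fetch_add(1, Ordering::Relaxed) + 1;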
Dirk
>> Best regards
>>
>> Dirk
>>
>>
>> [2]
>>
>> //! #[pin_data]
>> //! struct ArcIntrusiveTimer {
>> //!     #[pin]
>> //!     timer: Timer<Self>,
>> //!     #[pin]
>> //!     flag: Mutex<u64>,
>> //!     #[pin]
>> //!     cond: CondVar,
>> //! }
>> //!
>> //! impl ArcIntrusiveTimer {
>> //!     fn new() -> impl PinInit<Self, kernel::error::Error> {
>> //!         try_pin_init!(Self {
>> //!             timer <- Timer::new(),
>> //!             flag <- new_mutex!(0),
>> //!             cond <- new_condvar!(),
>> //!         })
>> //!     }
>> //! }
>> //!
>> //! impl TimerCallback for ArcIntrusiveTimer {
>> //!     type CallbackTarget<'a> = Arc<Self>;
>> //!     type CallbackPointer<'a> = Arc<Self>;
>> //!
>> //!     fn run(this: Self::CallbackTarget<'_>) -> TimerRestart {
>> //!         pr_info!("Timer called\n");
>> //!         let mut guard = this.flag.lock();
>> //!         *guard += 1;
>> //!         this.cond.notify_all();
>> //!         if *guard == 5 {
>> //!             TimerRestart::NoRestart
>> //!         } else {
>> //!             TimerRestart::Restart
>> //!         }
>> //!     }
>> //! }
>> //!
>> //! impl_has_timer! {
>> //!     impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
>> //! }
>> //!
>> //!
>> //! let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
>> //! let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
>> //! let mut guard = has_timer.flag.lock();
>> //!
>> //! while *guard != 5 {
>> //!     has_timer.cond.wait(&mut guard);
>> //! }
>> //!
>> //! pr_info!("Counted to 5\n");
>> //! # Ok::<(), kernel::error::Error>(())
>>
>>
>>
> [...]