Message-ID: <20191218191608.GG11457@worktop.programming.kicks-ass.net>
Date: Wed, 18 Dec 2019 20:16:08 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: David Howells <dhowells@...hat.com>
Cc: linux-afs@...ts.infradead.org, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>,
Davidlohr Bueso <dave@...olabs.net>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] rxrpc: Don't take call->user_mutex in
rxrpc_new_incoming_call()
On Wed, Dec 18, 2019 at 05:54:58PM +0000, David Howells wrote:
> Standard kernel mutexes cannot be used in any way from interrupt or softirq
> context, which is a problem for the user_mutex that manages access to a
> call: for a new incoming call, the mutex must start off locked and then be
> unlocked within the softirq handler to prevent userspace from interfering
> with a call we're still setting up.
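
(Purely for illustration -- this is not the actual rxrpc code; the foo_*
names and fields below are made up.  The shape being described is roughly:)

#include <linux/mutex.h>

struct foo_call {			/* hypothetical stand-in for rxrpc_call */
	struct mutex	user_mutex;	/* serialises userspace access */
	int		state;
};

/* Process context: preallocate the call with its mutex already held so
 * that userspace cannot touch it before it is ready. */
static void foo_prealloc_call(struct foo_call *call)
{
	mutex_init(&call->user_mutex);
	mutex_lock(&call->user_mutex);		/* starts off locked */
}

/* Softirq context: commit the preallocated call to an incoming packet,
 * then let userspace at it. */
static void foo_new_incoming_call(struct foo_call *call)
{
	call->state = 1;			/* ... set the call up ... */
	mutex_unlock(&call->user_mutex);	/* mutex API use from softirq */
}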
>
> Commit a0855d24fc22d49cdc25664fb224caee16998683 ("locking/mutex: Complain
> upon mutex API misuse in IRQ contexts") causes big warnings to be splashed
> in dmesg for each new call that comes in from the server. Whilst it
> *seems* like it should be okay, since the accept path uses trylock, there
> are issues with PI boosting and marking the wrong task as the owner.
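
(Again illustrative only, made-up names: even mutex_trylock() from softirq
is dubious.  A mutex records the owning task in lock->owner, and in softirq
context "current" is simply whichever task happened to be interrupted, so
that unrelated task ends up recorded as the owner and can be spun on, or
PI-boosted on RT, on the lock's behalf.)

#include <linux/mutex.h>
#include <linux/bug.h>

static void foo_softirq_accept(struct mutex *user_mutex)
{
	/*
	 * This succeeds because nobody else can know about the call yet,
	 * but "current" here is not the task that logically owns the lock.
	 */
	if (WARN_ON(!mutex_trylock(user_mutex)))
		return;

	/* ... set the call up ... */

	mutex_unlock(user_mutex);	/* unlock from softirq: also flagged */
}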
>
> Fix this by not taking the mutex in the softirq path at all. It's not
> obvious that there should be any need for it as the state is set before the
> first notification is generated for the new call.
>
> There's also no particular reason why the link-assessing ping should be
> triggered inside the mutex. It's not actually transmitted there anyway,
> but rather it has to be deferred to a workqueue.
>
> Further, I don't think that there's any particular reason that the socket
> notification needs to be done from within rx->incoming_lock, so the amount
> of time that lock is held can be shortened too and the ping prepared before
> the new call notification is sent.
>
Assuming this works, this is the best solution possible! Excellent work.
(I was about to suggest something based on wait_var_event() inside each
mutex_lock(), but this is _much_ nicer)
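
Roughly, the wait_var_event() shape would have been the below (a sketch
only; foo_call, setup_done and the helper names are made up, not actual
rxrpc code): the softirq path publishes "setup complete" and the user-side
paths wait for that before taking the mutex, instead of the softirq path
ever holding the mutex.

#include <linux/wait_bit.h>
#include <linux/mutex.h>
#include <linux/atomic.h>

struct foo_call {			/* hypothetical */
	struct mutex	user_mutex;
	unsigned long	setup_done;
};

/* softirq: finish setting up the call, then release any waiters. */
static void foo_call_setup_complete(struct foo_call *call)
{
	smp_store_release(&call->setup_done, 1);
	wake_up_var(&call->setup_done);
}

/* process context: block until the softirq side has finished setup,
 * then take the mutex as usual. */
static void foo_lock_call(struct foo_call *call)
{
	wait_var_event(&call->setup_done,
		       smp_load_acquire(&call->setup_done));
	mutex_lock(&call->user_mutex);
}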
Thanks!