Message-ID: <20201102181238.GA17806@infradead.org>
Date: Mon, 2 Nov 2020 18:12:38 +0000
From: Christoph Hellwig <hch@...radead.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Jens Axboe <axboe@...nel.dk>, Sagi Grimberg <sagi@...mberg.me>,
Christoph Hellwig <hch@...radead.org>,
linux-block@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
David Runge <dave@...epmap.de>, linux-rt-users@...r.kernel.org,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Daniel Wagner <dwagner@...e.de>, Mike Galbraith <efault@....de>
Subject: Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done
On Mon, Nov 02, 2020 at 10:55:33AM +0100, Sebastian Andrzej Siewior wrote:
> On 2020-10-31 09:00:49 [-0600], Jens Axboe wrote:
> > There really aren't any rules for this, and it's perfectly legit to
> > complete from process context. Maybe you're a kthread driven driver and
> > that's how you handle completions. The block completion path has always
> > been hard IRQ safe, but possible to call from anywhere.
>
> I'm not trying to put restrictions and forbidding completions from a
> kthread. I'm trying to avoid the pointless softirq dance for no added
> value. We could, to not break the assumption you just mentioned,
> provide
> |static inline void blk_mq_complete_request_local(struct request *rq)
> |{
> | rq->q->mq_ops->complete(rq);
> |}
>
> so that completions issued from process context (like those from
> usb-storage) don't end up waking `ksoftirqd' (running at SCHED_OTHER)
> to complete the requests, but instead perform the completion right
> away. The softirq dance makes no sense here.
Agreed. But I don't think your above blk_mq_complete_request_local
is all that useful either as ->complete is defined by the caller,
so we could just do a direct call. Basically we should just
return false from blk_mq_complete_request_remote after updating
the state when called from process context. But given that IIRC
we are not supposed to check what context we are called from,
we'll need a helper just for updating the state instead and
ensure the driver uses the right helper. Now of course we might
have process context callers that still want to bounce to the
submitting CPU, but in that case we should go directly to a
workqueue or similar.
Either way, doing this properly will probably involve an audit of all
drivers, but I think that is worth it.
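For illustration, here is a minimal userspace sketch of the flow being
proposed above. This is not the actual kernel implementation: the struct
fields, the `bounced_to_workqueue` counter, and the stubbed
`smp_processor_id()` stand-in are all hypothetical stand-ins used only to
model the decision "return false so the caller completes directly, or bounce
the completion to the submitting CPU":

```c
#include <stdbool.h>

/* Userspace model of the proposed completion flow. Names mirror the
 * discussion; the bodies are stand-ins, not real kernel code. */

struct request {
	int submit_cpu;		/* CPU the request was submitted on */
	bool completed;
};

static int current_cpu;		/* stand-in for smp_processor_id() */
static int bounced_to_workqueue;

/* stand-in for the driver's ->complete() callback */
static void driver_complete(struct request *rq)
{
	rq->completed = true;
}

/* Proposed behaviour: when the completion must run on the submitting
 * CPU and we are elsewhere, bounce it (modelled here as queueing work)
 * and return true; otherwise return false so the caller can invoke the
 * driver's completion handler directly, with no softirq detour. */
static bool blk_mq_complete_request_remote(struct request *rq)
{
	if (rq->submit_cpu != current_cpu) {
		bounced_to_workqueue++;	/* model: queue work on submit_cpu */
		return true;		/* completion handled asynchronously */
	}
	return false;			/* caller completes directly */
}

static void complete_request(struct request *rq)
{
	if (!blk_mq_complete_request_remote(rq))
		driver_complete(rq);	/* direct call from process context */
}
```

The point of the sketch is only the control flow: a process-context caller on
the right CPU never touches the softirq machinery, while cross-CPU
completions would go to a workqueue (or similar) rather than raising a
softirq.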