Message-ID: <20201027092606.GA20805@infradead.org>
Date: Tue, 27 Oct 2020 09:26:06 +0000
From: Christoph Hellwig <hch@...radead.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Christoph Hellwig <hch@...radead.org>,
David Runge <dave@...epmap.de>, linux-rt-users@...r.kernel.org,
Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Wagner <dwagner@...e.de>
Subject: Re: [PATCH RFC] blk-mq: Don't IPI requests on PREEMPT_RT

On Fri, Oct 23, 2020 at 03:52:19PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-10-23 12:21:30 [+0100], Christoph Hellwig wrote:
> > > - if (!IS_ENABLED(CONFIG_SMP) ||
> > > + if (!IS_ENABLED(CONFIG_SMP) || IS_ENABLED(CONFIG_PREEMPT_RT) ||
> > > !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
> >
> > This needs a big fat comment explaining your rationale. And probably
> > a separate if statement to make it obvious as well.
>
> Okay.
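
For illustration, a separate if statement with such a comment could look
roughly like this (a sketch only: the early-return body is assumed, since
the quoted hunk shows just the condition):

	if (!IS_ENABLED(CONFIG_SMP) ||
	    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags)) {
		rq->q->mq_ops->complete(rq);
		return;
	}

	/*
	 * On PREEMPT_RT the IPI runs in hard interrupt context, where a
	 * driver's ->complete() must not acquire a spinlock_t, as that is
	 * a sleeping lock on RT (Documentation/locking/locktypes.rst).
	 * Complete in the caller's context instead.
	 */
	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
		rq->q->mq_ops->complete(rq);
		return;
	}
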
> How much difference does it make between completing in-softirq vs
> in-IPI?
For normal non-RT builds? This introduces another context switch, which
for the latencies we are aiming for is noticeable.
> I'm asking because acquiring a spinlock_t in an IPI shouldn't be
> done (as per Documentation/locking/locktypes.rst). We don't have
> anything in lockdep that will complain here on !RT, and with the above
> we avoid the case on RT.
At least for NVMe we aren't taking locks, but with the number of drivers
out there it is hard to audit that for all of them.
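
The pattern in question would be a ->complete handler of roughly this
shape (all foo_* names are hypothetical; the point is the spinlock_t,
which becomes a sleeping lock on PREEMPT_RT and must not be taken from
IPI context):

	/* hypothetical per-device driver state */
	struct foo_queue {
		spinlock_t lock;
	};

	static void foo_complete_rq(struct request *rq)
	{
		struct foo_queue *fq = rq->q->queuedata; /* hypothetical */

		spin_lock(&fq->lock);		/* fine in hard IRQ on !RT; sleeps on RT */
		list_del_init(&rq->queuelist);	/* bookkeeping under the lock */
		spin_unlock(&fq->lock);

		blk_mq_end_request(rq, BLK_STS_OK);
	}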