Message-ID: <d2c15411-5b21-535b-6e07-331ebe22f8c8@grimberg.me>
Date: Thu, 29 Oct 2020 13:03:26 -0700
From: Sagi Grimberg <sagi@...mberg.me>
To: Christoph Hellwig <hch@...radead.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-block@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
David Runge <dave@...epmap.de>, linux-rt-users@...r.kernel.org,
Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Daniel Wagner <dwagner@...e.de>, Mike Galbraith <efault@....de>
Subject: Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done
>>> Well, usb-storage obviously seems to do it, and the block layer
>>> does not prohibit it.
>>
>> Also loop, nvme-tcp and then I stopped looking.
>> Any objections about adding local_bh_disable() around it?
>
> To me it seems like the whole IPI plus potentially softirq dance is
> a little pointless when completing from process context.
I agree.
> Sagi, any opinion on that from the nvme-tcp POV?
nvme-tcp should (almost) always complete from a context that matches
rq->mq_ctx->cpu, as the thread that processes incoming completions
(per hctx) should be affinitized to match it (unless cpus come and go).
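
To be concrete, the io context is pinned to a fixed cpu at queue
allocation time so that it lines up with the blk-mq mapping, roughly
like this (a simplified sketch from memory of drivers/nvme/host/tcp.c,
not verbatim):

	/* pick a fixed cpu for this queue's io context so it lines
	 * up with the blk-mq hctx mapping for this queue
	 */
	queue->io_cpu = cpumask_next_wrap(qid - 1, cpu_online_mask,
					  -1, false);

	/* all rx/tx for this queue then runs from io_work on that cpu */
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);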
So for nvme-tcp I don't expect blk_mq_complete_need_ipi to return true
in normal operation. That leaves the teardowns+aborts, which aren't very
interesting here.
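
For reference, the check boils down to comparing the completing cpu
with rq->mq_ctx->cpu (paraphrasing block/blk-mq.c from memory, details
may be slightly off):

	static inline bool blk_mq_complete_need_ipi(struct request *rq)
	{
		int cpu = raw_smp_processor_id();

		if (!IS_ENABLED(CONFIG_SMP) ||
		    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
			return false;

		/* same cpu or shared cache domain -> complete locally */
		if (cpu == rq->mq_ctx->cpu ||
		    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
		     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
			return false;

		/* don't IPI an offline cpu */
		return cpu_online(rq->mq_ctx->cpu);
	}

So as long as the nvme-tcp io context stays on the mapped cpu, this
returns false and we complete locally.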
I would note that nvme-tcp does not go to sleep after completing every
I/O the way Sebastian indicated usb-storage does.
Having said that, today the network stack calls nvme_tcp_data_ready in
napi context (softirq), which in turn triggers the queue thread to
handle network rx (and complete the I/O). It was measured recently that
running the rx context directly in softirq saves some latency (possible
because the nvme-tcp rx context is non-blocking).
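
For reference, the data_ready callback just schedules the io context
on its pinned cpu, something like this (again a simplified sketch from
memory):

	static void nvme_tcp_data_ready(struct sock *sk)
	{
		struct nvme_tcp_queue *queue;

		/* called from the network softirq (napi) context */
		read_lock(&sk->sk_callback_lock);
		queue = sk->sk_user_data;
		if (likely(queue && queue->rd_enabled))
			/* kick rx processing on the queue's pinned cpu */
			queue_work_on(queue->io_cpu, nvme_tcp_wq,
				      &queue->io_work);
		read_unlock(&sk->sk_callback_lock);
	}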
So I'd think that patch #2 is unnecessary and would just add overhead
for nvme-tcp. Do note that the napi softirq cpu mapping depends on the
RSS steering, which is unlikely to match rq->mq_ctx->cpu, hence if
completing from napi context, nvme-tcp would probably always take the
IPI path.