Date:   Thu, 29 Oct 2020 14:07:59 -0700
From:   Sagi Grimberg <>
To:     Sebastian Andrzej Siewior <>
Cc:     Christoph Hellwig <>,
        Thomas Gleixner <>,
        David Runge <>,
        Jens Axboe <>,
        Peter Zijlstra <>,
        Daniel Wagner <>, Mike Galbraith <>
Subject: Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done

>>>>> Well, usb-storage obviously seems to do it, and the block layer
>>>>> does not prohibit it.
>>>> Also loop and nvme-tcp, and then I stopped looking.
>>>> Any objections to adding local_bh_disable() around it?
>>> To me it seems like the whole IPI plus potentially softirq dance is
>>> a little pointless when completing from process context.
>> I agree.
>>> Sagi, any opinion on that from the nvme-tcp POV?
>> nvme-tcp should (almost) always complete from the context that matches
>> rq->mq_ctx->cpu, as the thread that processes incoming completions
>> (per hctx) should be affinitized to match it (unless CPUs
>> come and go).
> in which context?

Not sure what the question is.
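
(To make Sebastian's local_bh_disable() suggestion from earlier in the
thread concrete, here is a minimal sketch of bracketing a
process-context completion; the placement is illustrative and not taken
from the actual patch:)

	/*
	 * Sketch only: a driver completing a request from process
	 * context (usb-storage, loop, nvme-tcp, ...).  Disabling
	 * bottom halves around the completion means that any
	 * BLOCK_SOFTIRQ raised by it runs at local_bh_enable()
	 * rather than at some later, unrelated softirq point.
	 */
	local_bh_disable();
	blk_mq_complete_request(rq);	/* may raise BLOCK_SOFTIRQ */
	local_bh_enable();		/* runs any pending softirqs */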

> But this is probably nr_hw_queues > 1?


>> So for nvme-tcp I don't expect blk_mq_complete_need_ipi to return true
>> in normal operation. That leaves the teardowns+aborts, which aren't very
>> interesting here.
> The process context invocation is nvme_tcp_complete_timed_out().
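
(For reference, blk_mq_complete_need_ipi() is essentially a CPU-match
check against rq->mq_ctx->cpu; roughly as in the 5.9-era
block/blk-mq.c, slightly simplified:)

	static inline bool blk_mq_complete_need_ipi(struct request *rq)
	{
		int cpu = raw_smp_processor_id();

		if (!IS_ENABLED(CONFIG_SMP) ||
		    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
			return false;

		/* same CPU or shared cache: complete locally, no IPI */
		if (cpu == rq->mq_ctx->cpu ||
		    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
		     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
			return false;

		/* don't try to IPI an offline CPU */
		return cpu_online(rq->mq_ctx->cpu);
	}

With the nvme-tcp io thread affinitized to match rq->mq_ctx->cpu, the
cpu == rq->mq_ctx->cpu branch fires and no IPI is needed in normal
operation.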


>> I would note that nvme-tcp does not go to sleep after completing every
>> I/O the way Sebastian indicated USB does.
>> Having said that, today the network stack calls nvme_tcp_data_ready
>> in napi context (softirq), which in turn triggers the queue thread to
>> handle network rx (and complete the I/O). It has been measured recently
>> that running the rx context directly in softirq saves some latency
>> (possible because the nvme-tcp rx context is non-blocking).
>> So I'd think that patch #2 is unnecessary and just adds overhead for
>> nvme-tcp. Do note that the napi softirq CPU mapping depends on the RSS
>> steering, which is unlikely to match rq->mq_ctx->cpu, so if completed
>> from napi context, nvme-tcp will probably always take the IPI path.
> But running it in softirq on the remote CPU would still allow other
> packets to come in on the remote CPU (which would block the BLOCK
> softirq if NET_RX is already running).

Not sure I understand your comment. If napi triggers on core X and we
complete from there, it will trigger an IPI to core Y, and there, with
patch #2, it will trigger softirq instead of calling ->complete
directly, no?
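
(To spell out the two remote paths being compared: before the series,
the IPI handler on core Y calls the driver's ->complete directly; with
patch #2 it defers to BLOCK_SOFTIRQ instead. A rough sketch, with an
illustrative handler name rather than the real one:)

	/* Core X (e.g. napi rx context) decides the request must be
	 * completed on core Y = rq->mq_ctx->cpu and fires an IPI: */
	rq->csd.func = blk_mq_remote_done;	/* illustrative name */
	rq->csd.info = rq;
	smp_call_function_single_async(rq->mq_ctx->cpu, &rq->csd);

	/* IPI handler on core Y, pre-series: direct driver call */
	static void blk_mq_remote_done(void *data)
	{
		struct request *rq = data;

		rq->q->mq_ops->complete(rq);
	}

	/* With patch #2, the handler would instead queue the request on
	 * the per-cpu blk_cpu_done list and raise BLOCK_SOFTIRQ, so
	 * ->complete runs from softirq context on core Y: */
	list_add_tail(&rq->ipi_list, this_cpu_ptr(&blk_cpu_done));
	raise_softirq_irqoff(BLOCK_SOFTIRQ);

Either way the IPI still lands on core Y; the difference is only whether
->complete runs directly in the IPI handler or from BLOCK_SOFTIRQ
afterwards.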
