Message-ID: <1f01c041-cc6e-e27e-7691-63c903d1fed7@grimberg.me>
Date: Fri, 20 Sep 2019 10:09:47 -0700
From: Sagi Grimberg <sagi@...mberg.me>
To: Ming Lei <ming.lei@...hat.com>
Cc: Keith Busch <keith.busch@...el.com>,
Hannes Reinecke <hare@...e.com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Bart Van Assche <bvanassche@....org>,
linux-scsi@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
Long Li <longli@...rosoft.com>,
John Garry <john.garry@...wei.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-nvme@...ts.infradead.org, Jens Axboe <axboe@...com>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
>> It seems like we're attempting to stay in irq context for as long as we
>> can instead of scheduling to softirq/thread context if we have more than
>> a minimal amount of work to do. Without at least understanding why
>> softirq/thread degrades us so much, this code seems like the wrong
>> approach to me. Interrupt context will always be faster, but that is
>> not a sufficient reason to spend as much time as possible there, is it?
>
> If extra latency is added in the IO completion path, that latency also
> shows up in the submission path, because the hw queue depth is fixed,
> and often small. Especially with multiple submitters sharing a single
> completion context, the whole set of hw queue tags can be exhausted
> easily.
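
To put rough numbers on that (illustrative, not measured in this thread):
with a fixed queue depth QD and an average per-IO latency L, Little's law
caps sustainable IOPS at QD / L. A hypothetical queue with QD = 32 and
L = 10us tops out at 3.2M IOPS; an extra 10us of completion latency halves
that to 1.6M. Any latency added on the completion side directly throttles
submitters once the tags run out.
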
This is why the patch reaps the first batch in hard-irq context, and
only defers to softirq if there is more to process. So I'm not sure the
short QD use case applies here...
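
Roughly this shape (a minimal sketch of the scheme, not the actual patch;
the my_*() helpers and names are made up):

#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/irq_poll.h>

#define HARDIRQ_BUDGET 8

struct my_queue {
        struct irq_poll iop;
        /* ... completion queue state ... */
};

/* consume up to 'budget' completion entries, return how many were done */
static int my_reap(struct my_queue *q, int budget)
{
        int done = 0;

        while (done < budget && my_cqe_pending(q)) {
                my_handle_cqe(q);
                done++;
        }
        return done;
}

/* softirq side: keep polling until the queue is drained */
static int my_irq_poll(struct irq_poll *iop, int budget)
{
        struct my_queue *q = container_of(iop, struct my_queue, iop);
        int done = my_reap(q, budget);

        if (done < budget)
                irq_poll_complete(iop); /* drained */
        return done;
}

/* hard-irq side: reap the first batch here, punt the rest to softirq */
static irqreturn_t my_irq(int irq, void *data)
{
        struct my_queue *q = data;

        if (my_reap(q, HARDIRQ_BUDGET) == HARDIRQ_BUDGET)
                irq_poll_sched(&q->iop);

        return IRQ_HANDLED;
}

/* setup: irq_poll_init(&q->iop, poll_weight, my_irq_poll); */
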
> I guess no such effect for networking IO.

Maybe, maybe not...