Message-ID: <20190907000100.GC12290@ming.t460p>
Date: Sat, 7 Sep 2019 08:01:01 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Sagi Grimberg <sagi@...mberg.me>
Cc: Daniel Lezcano <daniel.lezcano@...aro.org>,
Keith Busch <keith.busch@...el.com>,
Hannes Reinecke <hare@...e.com>,
Bart Van Assche <bvanassche@....org>,
linux-scsi@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
Long Li <longli@...rosoft.com>,
John Garry <john.garry@...wei.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-nvme@...ts.infradead.org, Jens Axboe <axboe@...com>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
On Fri, Sep 06, 2019 at 11:30:57AM -0700, Sagi Grimberg wrote:
>
> >
> > Ok, so the real problem is per-cpu bounded tasks.
> >
> > I share Thomas opinion about a NAPI like approach.
>
> We already have that, it's irq_poll, but it seems that for this
> use-case, we get lower performance for some reason. I'm not
> entirely sure why that is; maybe it's because we need to mask
> interrupts, since we don't have an "arm" register in nvme like
> network devices have?
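
For reference, the irq_poll pattern being discussed looks roughly like
the sketch below. The queue structure and the process_completions()
helper are hypothetical names used only for illustration; the
irq_poll_*() calls are the actual include/linux/irq_poll.h API. Because
nvme has no interrupt "arm" register, the hard handler has to mask the
vector explicitly and the poll callback re-enables it once the queue is
drained:

#include <linux/interrupt.h>
#include <linux/irq_poll.h>

struct my_queue {
        struct irq_poll iop;
        int             irq;
};

/* Hard interrupt handler: mask the vector and defer to irq_poll. */
static irqreturn_t my_isr(int irq, void *data)
{
        struct my_queue *q = data;

        /* No "arm" register, so the source must be masked explicitly. */
        disable_irq_nosync(q->irq);
        irq_poll_sched(&q->iop);
        return IRQ_HANDLED;
}

/* Poll callback, run from softirq context (or ksoftirqd under load). */
static int my_poll(struct irq_poll *iop, int budget)
{
        struct my_queue *q = container_of(iop, struct my_queue, iop);
        int done = process_completions(q, budget);      /* hypothetical */

        if (done < budget) {
                /* Queue drained: stop polling and unmask the interrupt. */
                irq_poll_complete(iop);
                enable_irq(q->irq);
        }
        return done;
}

/* Setup, e.g. at queue init time: irq_poll_init(&q->iop, 64, my_poll); */
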
Long observed that IOPS also drops a lot when switching to threaded irq.
If ksoftirqd is woken up to handle the softirq, the performance
shouldn't be any better than with threaded irq. In particular, Long
found that context switches increase a lot after applying your irq_poll
patch (see the threaded-irq sketch after the link below).
http://lists.infradead.org/pipermail/linux-nvme/2019-August/026788.html
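
For comparison, the threaded-irq setup being measured would look roughly
like the sketch below (handler names are hypothetical, reusing struct
my_queue and process_completions() from the sketch above). Every
completion batch involves a wakeup of, and a context switch into, the
dedicated per-IRQ kernel thread:

#include <linux/interrupt.h>

/* Hard handler: nothing to do but wake the irq thread. */
static irqreturn_t my_hardirq(int irq, void *data)
{
        return IRQ_WAKE_THREAD;
}

/* Runs in a dedicated kernel thread ("irq/<nr>-<name>"). */
static irqreturn_t my_thread_fn(int irq, void *data)
{
        struct my_queue *q = data;

        process_completions(q, -1);     /* hypothetical: drain the queue */
        return IRQ_HANDLED;
}

/*
 * Setup:
 *      ret = request_threaded_irq(q->irq, my_hardirq, my_thread_fn,
 *                                 IRQF_ONESHOT, "my-queue", q);
 */
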
Thanks,
Ming