Message-ID: <7cb33d0d-5d40-b77b-3522-317a107794d6@linaro.org>
Date: Fri, 6 Sep 2019 06:36:23 +0200
From: Daniel Lezcano <daniel.lezcano@...aro.org>
To: Long Li <longli@...rosoft.com>, Ming Lei <ming.lei@...hat.com>
Cc: Bart Van Assche <bvanassche@....org>, Jens Axboe <axboe@...com>,
Hannes Reinecke <hare@...e.com>,
Sagi Grimberg <sagi@...mberg.me>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
John Garry <john.garry@...wei.com>,
LKML <linux-kernel@...r.kernel.org>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
Keith Busch <keith.busch@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
On 06/09/2019 03:22, Long Li wrote:
[ ... ]
>
> Tracing shows that the CPU was in either hardirq or softirq context
> all the time before the warnings appeared. During the tests, the
> system was unresponsive at times.
>
> Ming's patch fixed this problem. The system was responsive throughout
> the tests.
>
> As for the performance hit, both resulted in a small drop in peak
> IOPS: with IRQ_TIME_ACCOUNTING I see a 3% drop, and with Ming's patch
> a 1% drop.
Do you mean IRQ_TIME_ACCOUNTING + threaded IRQs?
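
(For reference, the two mechanisms are enabled differently; a minimal
sketch, assuming a self-built kernel and an editable boot command line:

    # IRQ time accounting: a build-time kernel option
    CONFIG_IRQ_TIME_ACCOUNTING=y

    # Threaded interrupt handlers: the "threadirqs" boot parameter,
    # which forces handlers into threads unless marked IRQF_NO_THREAD
    linux ... threadirqs

The former only changes how hardirq/softirq time is accounted, so that
it shows up separately instead of being charged to the interrupted
task; the latter moves handlers into schedulable kernel threads.)
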
> For the tests, I used the following fio command on 10 NVMe disks:
>
>   fio --bs=4k --ioengine=libaio --iodepth=128 \
>     --filename=/dev/nvme0n1:/dev/nvme1n1:/dev/nvme2n1:/dev/nvme3n1:/dev/nvme4n1:/dev/nvme5n1:/dev/nvme6n1:/dev/nvme7n1:/dev/nvme8n1:/dev/nvme9n1 \
>     --direct=1 --runtime=12000 --numjobs=80 --rw=randread --name=test \
>     --group_reporting --gtod_reduce=1
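
For anyone reproducing this, one way to watch the saturation described
above while the fio run is in flight (assuming sysstat's mpstat is
installed; these are stock tools, nothing from the patches):

    # per-CPU breakdown; the %irq and %soft columns show
    # hardirq and softirq time
    mpstat -P ALL 1

    # per-vector interrupt counts for the NVMe queues
    watch -n1 'grep nvme /proc/interrupts'

A CPU pinned near 100% in %irq + %soft for the whole run matches the
"in either hardirq or softirq all the time" observation.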