Message-ID: <CY4PR21MB0741D1CD295AD572548E61D1CEAA0@CY4PR21MB0741.namprd21.prod.outlook.com>
Date:   Wed, 21 Aug 2019 07:47:44 +0000
From:   Long Li <longli@...rosoft.com>
To:     John Garry <john.garry@...wei.com>,
        Ming Lei <tom.leiming@...il.com>,
        "longli@...uxonhyperv.com" <longli@...uxonhyperv.com>
CC:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Keith Busch <keith.busch@...el.com>, Jens Axboe <axboe@...com>,
        Christoph Hellwig <hch@....de>,
        Sagi Grimberg <sagi@...mberg.me>,
        linux-nvme <linux-nvme@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        chenxiang <chenxiang66@...ilicon.com>
Subject: RE: [PATCH 0/3] fix interrupt swamp in NVMe

>>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe
>>>
>>>On 20/08/2019 09:25, Ming Lei wrote:
>>>> On Tue, Aug 20, 2019 at 2:14 PM <longli@...uxonhyperv.com> wrote:
>>>>>
>>>>> From: Long Li <longli@...rosoft.com>
>>>>>
>>>>> This patch set tries to fix interrupt swamp in NVMe devices.
>>>>>
>>>>> On large systems with many CPUs, a number of CPUs may share one
>>>>> NVMe hardware queue. This can lead to a situation where several CPUs
>>>>> are issuing I/O while all of the completions are returned on the CPU
>>>>> to which the hardware queue is bound. That CPU can be swamped by
>>>>> interrupts and stay in interrupt mode for an extended time while the
>>>>> other CPUs continue to issue I/O. This can trigger watchdog and RCU
>>>>> timeouts and make the system unresponsive.
>>>>>
>>>>> This patch set addresses this by enforcing scheduling and throttling
>>>>> I/O when a CPU is starved in this situation.
>>>>>
>>>>> Long Li (3):
>>>>>   sched: define a function to report the number of context switches on a
>>>>>     CPU
>>>>>   sched: export idle_cpu()
>>>>>   nvme: complete request in work queue on CPU with flooded interrupts
>>>>>
>>>>>  drivers/nvme/host/core.c | 57 +++++++++++++++++++++++++++++++++++++++-
>>>>>  drivers/nvme/host/nvme.h |  1 +
>>>>>  include/linux/sched.h    |  2 ++
>>>>>  kernel/sched/core.c      |  7 +++++
>>>>>  4 files changed, 66 insertions(+), 1 deletion(-)
>>>>
>>>> Another simpler solution may be to complete the request in a threaded
>>>> interrupt handler for this case, and meanwhile allow the scheduler to
>>>> run the interrupt thread handler on the CPUs specified by the irq
>>>> affinity mask, as was discussed in the following link:
>>>>
>>>>
>>>> https://lore.kernel.org/lkml/e0e9478e-62a5-ca24-3b12-58f7d056383e@huawei.com/
>>>>
>>>> Could you try the above solution and see if the lockup can be avoided?
>>>> John Garry should have a workable patch.
>>>
>>>Yeah, so we experimented with changing the interrupt handling in the SCSI
>>>driver I maintain to use a threaded IRQ handler plus the patch below,
>>>and saw a significant throughput boost:
>>>
>>>--->8
>>>
>>>Subject: [PATCH] genirq: Add support to allow thread to use hard irq affinity
>>>
>>>Currently the cpu allowed mask for the threaded part of a threaded irq
>>>handler will be set to the effective affinity of the hard irq.
>>>
>>>Typically the effective affinity of the hard irq will be for a single cpu. As such,
>>>the threaded handler would always run on the same cpu as the hard irq.
>>>
>>>We have seen scenarios in high data-rate throughput testing where the cpu
>>>handling the interrupt can be totally saturated handling both the hard
>>>interrupt and the threaded handler parts, limiting throughput.
>>>
>>>Add an IRQF_IRQ_AFFINITY flag to allow the driver requesting the threaded
>>>interrupt to decide the policy for which cpus the threaded handler may run on.
>>>
>>>Signed-off-by: John Garry <john.garry@...wei.com>

Thanks for pointing me to this patch. This fixed the interrupt swamp and made the system stable.

However, I'm seeing reduced performance when using threaded interrupts.

Here are the test results on a system with 80 CPUs and 10 NVMe disks (32 hardware queues for each disk).
The benchmark tool is fio; I/O pattern: 4k random reads on all NVMe disks, with queue depth = 64, number of jobs = 80, direct=1.

With threaded interrupts: 1320k IOPS
With just interrupts: 3720k IOPS
With just interrupts and my patch: 3700k IOPS

At the peak IOPS, the overall CPU usage is around 98-99%. I think the cost of doing a wake-up and context switch for the NVMe threaded IRQ handler takes some CPU away.
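
For context, the threaded model being measured looks roughly like the sketch below (hypothetical demo_* names, not taken from the NVMe patches): the hard handler returns IRQ_WAKE_THREAD, so every interrupt pays a thread wake-up and context switch before the completion work runs in the thread function.

#include <linux/interrupt.h>

/* Hypothetical per-queue context, for illustration only. */
struct demo_queue {
	int irq;
	/* ... completion ring state ... */
};

/* Hard handler: runs in hard-irq context and only decides whether the
 * interrupt is ours; the completion work is deferred to the thread, which
 * costs one wake-up (and typically a context switch) per interrupt. */
static irqreturn_t demo_hardirq(int irq, void *data)
{
	/* Check/ack the device status here; assume it fired for us. */
	return IRQ_WAKE_THREAD;
}

/* Threaded handler: runs in process context on a CPU taken from the
 * thread's allowed mask (which the IRQF_IRQ_AFFINITY change widens from
 * the effective affinity to the full irq affinity mask). */
static irqreturn_t demo_thread_fn(int irq, void *data)
{
	struct demo_queue *q = data;

	/* ... process completions for q ... */
	(void)q;
	return IRQ_HANDLED;
}

static int demo_setup_irq(struct demo_queue *q)
{
	return request_threaded_irq(q->irq, demo_hardirq, demo_thread_fn,
				    IRQF_ONESHOT, "demo-queue", q);
}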

In this test, I made the following change to make use of IRQF_IRQ_AFFINITY for NVMe:

diff --git a/drivers/pci/irq.c b/drivers/pci/irq.c
index a1de501a2729..3fb30d16464e 100644
--- a/drivers/pci/irq.c
+++ b/drivers/pci/irq.c
@@ -86,7 +86,7 @@ int pci_request_irq(struct pci_dev *dev, unsigned int nr, irq_handler_t handler,
        va_list ap;
        int ret;
        char *devname;
-       unsigned long irqflags = IRQF_SHARED;
+       unsigned long irqflags = IRQF_SHARED | IRQF_IRQ_AFFINITY;

        if (!handler)
                irqflags |= IRQF_ONESHOT;
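
As a side note, pci_request_irq() does not take caller-supplied irqflags, which is why the hunk above turns the flag on for every PCI driver in this test. A rough, hypothetical alternative that would limit the flag to just the NVMe queue vectors (assuming John's IRQF_IRQ_AFFINITY patch is applied) is to request the threaded interrupt directly:

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Hypothetical helper, for illustration only: request one queue vector
 * with the proposed IRQF_IRQ_AFFINITY flag instead of changing the
 * default flags in pci_request_irq(). */
static int demo_request_queue_irq(struct pci_dev *pdev, unsigned int nr,
				  irq_handler_t handler,
				  irq_handler_t thread_fn, void *data)
{
	int irq = pci_irq_vector(pdev, nr);

	if (irq < 0)
		return irq;

	return request_threaded_irq(irq, handler, thread_fn,
				    IRQF_SHARED | IRQF_ONESHOT |
				    IRQF_IRQ_AFFINITY,
				    "demo-nvme-queue", data);
}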

Thanks

Long

>>>
>>>diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
>>>index 5b8328a99b2a..48e8b955989a 100644
>>>--- a/include/linux/interrupt.h
>>>+++ b/include/linux/interrupt.h
>>>@@ -61,6 +61,9 @@
>>>   *                interrupt handler after suspending interrupts. For system
>>>   *                wakeup devices users need to implement wakeup detection in
>>>   *                their interrupt handlers.
>>>+ * IRQF_IRQ_AFFINITY - Use the hard interrupt affinity for setting the cpu
>>>+ *                allowed mask for the threaded handler of a threaded interrupt
>>>+ *                handler, rather than the effective hard irq affinity.
>>>   */
>>>  #define IRQF_SHARED		0x00000080
>>>  #define IRQF_PROBE_SHARED	0x00000100
>>>@@ -74,6 +77,7 @@
>>>  #define IRQF_NO_THREAD		0x00010000
>>>  #define IRQF_EARLY_RESUME	0x00020000
>>>  #define IRQF_COND_SUSPEND	0x00040000
>>>+#define IRQF_IRQ_AFFINITY	0x00080000
>>>
>>>  #define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
>>>
>>>diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
>>>index e8f7f179bf77..cb483a055512 100644
>>>--- a/kernel/irq/manage.c
>>>+++ b/kernel/irq/manage.c
>>>@@ -966,9 +966,13 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
>>>  	 * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out.
>>>  	 */
>>>  	if (cpumask_available(desc->irq_common_data.affinity)) {
>>>+		struct irq_data *irq_data = &desc->irq_data;
>>>  		const struct cpumask *m;
>>>
>>>-		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
>>>+		if (action->flags & IRQF_IRQ_AFFINITY)
>>>+			m = desc->irq_common_data.affinity;
>>>+		else
>>>+			m = irq_data_get_effective_affinity_mask(irq_data);
>>>  		cpumask_copy(mask, m);
>>>  	} else {
>>>  		valid = false;
>>>--
>>>2.17.1
>>>
>>>As Ming mentioned in that same thread, we could even make this the policy
>>>for managed interrupts.
>>>
>>>Cheers,
>>>John
>>>
>>>>
>>>> Thanks,
>>>> Ming Lei
>>>>
>>>> .
>>>>
>>>
