Message-ID: <DF4PR84MB01695445B0BE8A046742942DAB160@DF4PR84MB0169.NAMPRD84.PROD.OUTLOOK.COM>
Date: Fri, 19 Aug 2016 21:27:52 +0000
From: "Elliott, Robert (Persistent Memory)" <elliott@....com>
To: Sreekanth Reddy <sreekanth.reddy@...adcom.com>
CC: "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"irqbalance@...ts.infradead.org" <irqbalance@...ts.infradead.org>,
"Kashyap Desai" <kashyap.desai@...adcom.com>,
Sathya Prakash Veerichetty <sathya.prakash@...adcom.com>,
Chaitra Basappa <chaitra.basappa@...adcom.com>,
Suganath Prabu Subramani
<suganath-prabu.subramani@...adcom.com>
Subject: RE: Observing Softlockup's while running heavy IOs
> -----Original Message-----
> From: Sreekanth Reddy [mailto:sreekanth.reddy@...adcom.com]
> Sent: Friday, August 19, 2016 6:45 AM
> To: Elliott, Robert (Persistent Memory) <elliott@....com>
> Subject: Re: Observing Softlockup's while running heavy IOs
>
...
> Yes, I am also observing that all the interrupts are routed to one
> CPU. But I am still observing softlockups (sometimes hardlockups)
> even when I set rq_affinity to 2.
That'll ensure the block layer's completion handling is done there,
but not your driver's interrupt handler (which precedes the block
layer completion handling).
> Is there any way to route the interrupts to the same CPUs that
> submitted the corresponding IOs?
> or
> Is there any way/option in irqbalance/the kernel that can route
> interrupts to the CPUs enabled in affinity_hint in a round-robin
> manner after a specific time period?
Ensure your driver creates one MSI-X interrupt per CPU core, uses
that interrupt for all submissions from that core, and reports
that it would like that interrupt to be serviced by that core
in /proc/irq/nnn/affinity_hint.
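You can eyeball how the driver's hints line up with the current routing
with something like this (a sketch; the "mpt3sas" pattern is an example
driver name, and the /proc root is parameterized only so the snippet can
be exercised off a live system):

```shell
PROC=${PROC:-/proc}

show_irq_hints() {
    # $1 = pattern matching the driver's lines in /proc/interrupts.
    # Prints each matching IRQ's affinity_hint next to its smp_affinity
    # (both are hex CPU bitmasks).
    local irq
    for irq in $(awk -F: -v pat="$1" \
            '$0 ~ pat { f = $1; gsub(/ /, "", f); print f }' \
            "$PROC/interrupts"); do
        printf 'irq %s hint=%s affinity=%s\n' "$irq" \
            "$(cat "$PROC/irq/$irq/affinity_hint")" \
            "$(cat "$PROC/irq/$irq/smp_affinity")"
    done
}

# Example: show_irq_hints mpt3sas
```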
Even with hyperthreading, this needs to be based on the logical
CPU cores, not just the physical core or the physical socket.
You can swamp a logical CPU core as easily as a physical CPU core.
Then, provide an irqbalance policy script that honors the
affinity_hint for your driver, or turn off irqbalance and
manually set /proc/irq/nnn/smp_affinity to match the
affinity_hint.
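If you take the manual route, a sketch of copying every reported hint into
smp_affinity (run as root against a live /proc; the PROC override below
exists only so the snippet is self-contained):

```shell
PROC=${PROC:-/proc}

apply_affinity_hints() {
    # For every IRQ that reports a non-zero affinity_hint, copy the
    # hint mask into smp_affinity so the interrupt is serviced where
    # the driver asked.
    local d hint
    for d in "$PROC"/irq/[0-9]*; do
        [ -r "$d/affinity_hint" ] || continue
        hint=$(cat "$d/affinity_hint")
        # Skip empty or all-zero hints (driver did not report one).
        [ -n "${hint//[0,]/}" ] || continue
        echo "$hint" > "$d/smp_affinity"
    done
}
```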
Some versions of irqbalance honor the hints; some purposely
don't and need to be overridden with a policy script.
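A minimal policy script sketch, assuming an irqbalance version whose
--policyscript interface invokes the script per IRQ as <sysfs path> <irq>
and accepts a hintpolicy= line on stdout ("mpt3sas" is an example driver
name; check your irqbalance man page for the keys it recognizes):

```shell
PROC=${PROC:-/proc}

policy_for_irq() {
    # $1 = IRQ number. Emit hintpolicy=exact for IRQs owned by the
    # driver so irqbalance follows the driver's affinity_hint.
    if awk -F: -v i="$1" '{ f = $1; gsub(/ /, "", f) } f == i' \
            "$PROC/interrupts" | grep -q mpt3sas; then
        echo "hintpolicy=exact"
    fi
}

# irqbalance calls this script with: $1 = sysfs device path, $2 = IRQ.
if [ -n "$2" ]; then
    policy_for_irq "$2"
fi
```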
---
Robert Elliott, HPE Persistent Memory