Message-ID: <CY4PR21MB077370DED982AAC7B40B884CD7C40@CY4PR21MB0773.namprd21.prod.outlook.com>
Date: Wed, 7 Nov 2018 22:42:27 +0000
From: Michael Kelley <mikelley@...rosoft.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...nel.org" <mingo@...nel.org>,
"hpa@...or.com" <hpa@...or.com>, Long Li <longli@...rosoft.com>
Subject: RE: [tip:irq/core] genirq/matrix: Improve target CPU selection for
managed interrupts.
From: Thomas Gleixner <tglx@...utronix.de> Sent: Wednesday, November 7, 2018 12:23 PM
>
> There is another interesting property of managed interrupts vs. CPU
> hotplug. When the last CPU in the affinity mask goes offline, then the core
> code shuts down the interrupt and the device driver and related layers
> exclude the associated device queue from I/O. The same applies for CPUs
> which are not online when the device is initialized, i.e. if none of the
> CPUs is online then the interrupt is not started and the I/O queue stays
> disabled.
>
> When the first CPU in the mask comes online (again), then the interrupt is
> reenabled and the device driver and related layers reenable I/O on the
> associated device queue.
>
Thanks! The transitions into and out of the state where none of the CPUs
in the affinity mask are online are the piece I wasn't aware of. With that
piece of the puzzle, it all makes sense.
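
Just to capture my understanding, below is a minimal stand-alone sketch
(plain C, not actual genirq code; the mask handling, helper names, and
output are made up for illustration) of the behaviour described above:
the interrupt and its queue are live only while at least one CPU in the
managed affinity mask is online, and the transitions happen on the
last-offline / first-online events.

/*
 * Hypothetical model of managed-IRQ behaviour across CPU hotplug.
 * Not kernel code: CPU masks are plain bitmasks, "irq_started" stands
 * in for the interrupt plus its device queue being enabled.
 */
#include <stdio.h>
#include <stdbool.h>

static unsigned long affinity_mask = 0x0c;   /* CPUs 2 and 3 (example) */
static unsigned long online_mask;            /* currently online CPUs  */
static bool irq_started;                     /* interrupt + queue live */

static void cpu_online(int cpu)
{
	bool mask_was_empty = !(online_mask & affinity_mask);

	online_mask |= 1UL << cpu;

	/* First CPU of the mask coming online: restart interrupt and queue. */
	if (mask_was_empty && (online_mask & affinity_mask) && !irq_started) {
		irq_started = true;
		printf("cpu%d online: irq started, queue enabled\n", cpu);
	}
}

static void cpu_offline(int cpu)
{
	online_mask &= ~(1UL << cpu);

	/* Last CPU of the mask going offline: shut down interrupt and queue. */
	if (irq_started && !(online_mask & affinity_mask)) {
		irq_started = false;
		printf("cpu%d offline: irq shut down, queue disabled\n", cpu);
	}
}

int main(void)
{
	cpu_online(0);      /* not in the mask: nothing happens        */
	cpu_online(2);      /* first CPU of the mask: irq starts       */
	cpu_online(3);
	cpu_offline(2);
	cpu_offline(3);     /* last CPU of the mask: irq shuts down    */
	cpu_online(3);      /* mask populated again: irq restarts      */
	return 0;
}

Running the model prints the start/shutdown transitions only at the
first-online and last-offline events, which matches the description.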
Michael