Message-ID: <205710c7-774c-0f9d-c336-72da56844646@compro.net>
Date: Mon, 22 Aug 2016 12:35:44 -0400
From: Mark Hounschell <markh@...pro.net>
To: paulmck@...ux.vnet.ibm.com
Cc: GeHao Kang <kanghao0928@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Chris Metcalf <cmetcalf@...lanox.com>,
Frédéric Weisbecker <fweisbec@...il.com>,
linux-api@...r.kernel.org, linux-kernel@...r.kernel.org,
tglx@...utronix.de, mingo@...nel.org
Subject: Re: Context switch latency in tickless isolated CPU
On 08/22/2016 11:37 AM, Paul E. McKenney wrote:
> On Mon, Aug 22, 2016 at 11:12:45AM -0400, Mark Hounschell wrote:
>> On 08/22/2016 10:48 AM, Paul E. McKenney wrote:
>>> On Mon, Aug 22, 2016 at 05:40:03PM +0800, GeHao Kang wrote:
>>>> On Sun, Aug 21, 2016 at 10:53 PM, Paul E. McKenney
>>>> <paulmck@...ux.vnet.ibm.com> wrote:
>>>>> If latency is all you care about, one approach is to map the device
>>>>> registers into userspace and do the I/O without assistance from the
>>>>> kernel.
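(For anyone wanting to try that approach, here is a minimal sketch of
what userspace register access could look like, assuming the device is
exposed through UIO as /dev/uio0; the device path, mapping size, and
the polled register bit below are purely illustrative.)

/*
 * Minimal sketch: map a UIO-exposed device's first register region
 * (map 0) into userspace and poll a status register directly, with no
 * syscalls or interrupts in the fast path.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/uio0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/uio0");
		return 1;
	}

	/* Offset 0 selects map 0 of the UIO device. */
	volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				       MAP_SHARED, fd, 0);
	if (regs == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Busy-wait on a (hypothetical) status bit in register 0. */
	while ((regs[0] & 0x1) == 0)
		;

	munmap((void *)regs, 4096);
	close(fd);
	return 0;
}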
>>>> In addition to the context switch latency, local interrupts are also
>>>> disabled during user_enter and user_exit of the context tracking.
>>>> Therefore, the interrupt latency might also be increased on the
>>>> isolated tickless CPU, which will degrade the real-time performance.
>>>> Are these two sources of latency unavoidable?
>>>
>>> Hmmm... Why would you be taking interrupts on your isolated tickless
>>> CPUs? Doesn't that defeat the purpose of designating them as isolated
>>> and tickless?
>>
>> Don't mean to butt in here, but think about a "special" PCI card that
>> does nothing but take an external interrupt or interrupts from an
>> outside source, where what matters is the latency between the time an
>> event occurs on the outside and the time an isolated processor can act
>> on it. The IRQ of that card is also pinned/isolated to that processor.
>> This is a very common thing in the RT world.
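(Just to make that concrete, a minimal sketch of pinning such an IRQ to
one CPU from userspace by writing to /proc/irq/<N>/smp_affinity_list;
the IRQ number and CPU used below are only examples.)

#include <stdio.h>

int main(void)
{
	/* Route (hypothetical) IRQ 24 to CPU 3 only; needs root. */
	FILE *f = fopen("/proc/irq/24/smp_affinity_list", "w");
	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "3\n");
	fclose(f);
	return 0;
}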
>
> In this case, the host OS would see an event-driven real-time workload
> from the PCI card, which would lead me to suggest -not- using NO_HZ_FULL
> on the host OS.
>
> Of course, if you are instead building an OS to run on the PCI card
> itself, then the choice of configuration would depend on how the PCI
> card was set up. If it polled hardware, then NO_HZ_FULL on the PCI card
> might work quite well. But then you wouldn't have interrupts (on the
> PCI card), so I am guessing that you mean the scenario covered in the
> first paragraph.
>
> Or am I missing your point?
>
> Thanx, Paul
>
The first paragraph scenario is the one I was referring to.
Thanks
Mark
>> Mark
>>
>>> The key point being that effective use of NO_HZ_FULL requires
>>> careful configuration and complete understanding of your workload.
>>> And it is quite possible that you instead need to use something
>>> other than NO_HZ_FULL.
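(For what it's worth, a typical boot line for that kind of careful
configuration, with the CPU list purely illustrative, might be

	nohz_full=2-3 rcu_nocbs=2-3 isolcpus=2-3

so the tick, RCU callback processing, and the general-purpose scheduler
all stay off the CPUs reserved for the RT work.)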
>>>
>>> If your question is instead "why must interrupts be disabled during
>>> context tracking", I must defer to people who understand the x86
>>> entry/exit code paths better than I do.
>>>
>>> Thanx, Paul
>>>
>>>
>>
>
>