Message-ID: <abf2dfe7-e148-b011-764d-b9effa573d5d@redhat.com>
Date: Fri, 18 Jan 2019 09:50:12 -0500
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
James Morse <james.morse@....com>
Cc: Zhenzhong Duan <zhenzhong.duan@...cle.com>,
LKML <linux-kernel@...r.kernel.org>,
SRINIVAS <srinivas.eeda@...cle.com>,
Borislav Petkov <bp@...en8.de>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: Question about qspinlock nest
On 01/18/2019 05:02 AM, Peter Zijlstra wrote:
>
>> e.g. We can't take an SError during the SError handler.
>>
>> But we can take this SError/NMI on another CPU while the first one is still
>> running the handler.
>>
>> These multiple NMI-like notifications mean having multiple locks/fixmap-slots,
>> one per notification. This is where the qspinlock node limit comes in, as we
>> could have more than 4 contexts.
> Right; so Waiman was going to do a patch that reverts to test-and-set or
> something along those lines once we hit the queue limit, which seems
> like a good way out. Actually hitting that nesting level should be
> exceedingly rare.
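For anyone following along, the limit in question comes from the fixed
per-CPU array of MCS nodes that the slowpath hands out, one per nesting
context (task, softirq, hardirq, NMI). Roughly (a simplified sketch;
names only approximate the upstream qspinlock code):

#define MAX_NODES	4

/*
 * One MCS queue node per CPU per nesting context
 * (task, softirq, hardirq, NMI).
 */
static DEFINE_PER_CPU_ALIGNED(struct qnode, qnodes[MAX_NODES]);

	/* In queued_spin_lock_slowpath(), when a CPU joins the queue: */
	node = this_cpu_ptr(&qnodes[0].mcs);
	idx  = node->count++;	/* which of the 4 nodes this context uses */
	tail = encode_tail(smp_processor_id(), idx);
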
Yes, I am working on a patch to support arbitrary levels of nesting. It
is easy for PV qspinlock as lock stealing is supported.
For native qspinlock, we cannot do lock stealing without incurring a
certain amount of overhead in the regular slowpath code. It was up to
10% in my own testing. So I am exploring an alternative that can do the
job without incurring any noticeable performance degradation in the
slowpath. I ran into a race condition whose origin I am still trying to
track down. Hopefully, I will have something to post next week.
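
For reference, the queue-limit fallback that Peter mentions would look
roughly like the following in the slowpath (a sketch of the idea only,
not an actual patch): if all four per-CPU nodes are already in use,
don't queue at all and just spin on trylock until the lock can be taken.

	idx = node->count++;

	if (unlikely(idx >= MAX_NODES)) {
		/*
		 * No MCS node left for this context: fall back to
		 * test-and-set spinning on the lock word.
		 */
		while (!queued_spin_trylock(lock))
			cpu_relax();
		goto release;
	}

Such a waiter is unfair with respect to the queued waiters, but as Peter
notes, actually reaching that nesting level should be exceedingly rare.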
Cheers,
Longman