[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <4210fcce-7ed1-8f93-a0f0-0fc588792fee@huawei.com>
Date: Thu, 13 Jun 2019 17:53:39 +0800
From: Hanjun Guo <guohanjun@...wei.com>
To: Jayachandran Chandrasekharan Nair <jnair@...vell.com>,
Will Deacon <will.deacon@....com>
CC: Ard Biesheuvel <ard.biesheuvel@...aro.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
Jan Glauber <jglauber@...vell.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [RFC] Disable lockref on arm64
On 2019/6/12 12:10, Jayachandran Chandrasekharan Nair wrote:
> On Wed, May 22, 2019 at 05:04:17PM +0100, Will Deacon wrote:
>> On Sat, May 18, 2019 at 12:00:34PM +0200, Ard Biesheuvel wrote:
>>> On Sat, 18 May 2019 at 06:25, Jayachandran Chandrasekharan Nair
>>> <jnair@...vell.com> wrote:
>>>>
>>>> On Mon, May 06, 2019 at 07:10:40PM +0100, Will Deacon wrote:
>>>>> On Mon, May 06, 2019 at 06:13:12AM +0000, Jayachandran Chandrasekharan Nair wrote:
>>>>>> Perhaps someone from ARM can chime in here how the cas/yield combo
>>>>>> is expected to work when there is contention. ThunderX2 does not
>>>>>> do much with the yield, but I don't expect any ARM implementation
>>>>>> to treat YIELD as a hint not to yield, but to get/keep exclusive
>>>>>> access to the last failed CAS location.
>>>>>
>>>>> Just picking up on this as "someone from ARM".
>>>>>
>>>>> The yield instruction in our implementation of cpu_relax() is *only* there
>>>>> as a scheduling hint to QEMU so that it can treat it as an internal
>>>>> scheduling hint and run some other thread; see 1baa82f48030 ("arm64:
>>>>> Implement cpu_relax as yield"). We can't use WFE or WFI blindly here, as it
>>>>> could be a long time before we see a wake-up event such as an interrupt. Our
>>>>> implementation of smp_cond_load_acquire() is much better for that kind of
>>>>> thing, but doesn't help at all for a contended CAS loop where the variable
>>>>> is actually changing constantly.
>>>>
>>>> Looking thru the perf output of this case (open/close of a file from
>>>> multiple CPUs), I see that refcount is a significant factor in most
>>>> kernel configurations - and that too uses cmpxchg (without yield).
>>>> x86 has an optimized inline version of refcount that helps
>>>> significantly. Do you think this is worth looking at for arm64?
>>>>
>>>
>>> I looked into this a while ago [0], but at the time, we decided to
>>> stick with the generic implementation until we encountered a use case
>>> that benefits from it. Worth a try, I suppose ...
>>>
>>> [0] https://lore.kernel.org/linux-arm-kernel/20170903101622.12093-1-ard.biesheuvel@linaro.org/
>>
>> If JC can show that we benefit from this, it would be interesting to see if
>> we can implement the refcount-full saturating arithmetic using the
>> LDMIN/LDMAX instructions instead of the current cmpxchg() loops.
>
> Now that the lockref change is mainline, I think we need to take another
> look at this patch.
>
> Using a fixed up version of Ard's patch above along with Jan's lockref
> change upstream, I get significant improvement in scaling for my file
> open/read/close testcase[1]. Like I wrote earlier, if I take a
> standard Ubuntu arm64 kernel configuration, most of the time for my
> test[1] is spent in refcount operations.
>
> With Ard's changes applied[2], I see that the lockref CAS code becomes
> the top function and then the retry limit will kick in as expected. In
> my testcase, I see that the queued spinlock case is about 2.5 times
> faster than the unbound CAS loop when 224 CPUs are enabled (SMT 4,
> 28core, 2socket).
>
> JC
>
> [1] https://github.com/jchandra-cavm/refcount-test
> [2] https://github.com/jchandra-cavm/linux/commits/refcount-fixes
FWIW, with the patch (Ard's patch plus fixes) above, running the
same testcase on ARM64 Kunpeng920 96 CPU core system, I can see about 50%
performance boost.
I also tested Jan's lockref change without Ard's patch, performance
is almost the same.
Thanks
Hanjun
Powered by blists - more mailing lists