Message-Id: <6AEDE4F2-306A-4DF9-9307-9E3517C68A2B@oracle.com>
Date:   Wed, 3 Apr 2019 11:39:09 -0400
From:   Alex Kogan <alex.kogan@...cle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Waiman Long <longman@...hat.com>, linux@...linux.org.uk,
        mingo@...hat.com, will.deacon@....com, arnd@...db.de,
        linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, tglx@...utronix.de, bp@...en8.de,
        hpa@...or.com, x86@...nel.org, steven.sistare@...cle.com,
        daniel.m.jordan@...cle.com, dave.dice@...cle.com,
        rahul.x.yadav@...cle.com
Subject: Re: [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow
 path of qspinlock

Peter, Longman, many thanks for your detailed comments!

A few follow-up questions are inlined below.

> On Apr 2, 2019, at 5:43 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> 
> On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
>> On 03/29/2019 11:20 AM, Alex Kogan wrote:
>>> +config NUMA_AWARE_SPINLOCKS
>>> +	bool "Numa-aware spinlocks"
>>> +	depends on NUMA
>>> +	default y
>>> +	help
>>> +	  Introduce NUMA (Non Uniform Memory Access) awareness into
>>> +	  the slow path of spinlocks.
>>> +
>>> +	  The kernel will try to keep the lock on the same node,
>>> +	  thus reducing the number of remote cache misses, while
>>> +	  trading some of the short term fairness for better performance.
>>> +
>>> +	  Say N if you want absolute first come first serve fairness.
>>> +
>> 
>> The patch that I am looking for is to have a separate
>> numa_queued_spinlock_slowpath() that coexists with
>> native_queued_spinlock_slowpath() and
>> paravirt_queued_spinlock_slowpath(). At boot time, we select the most
>> appropriate one for the system at hand.
Is this how the selection works today for paravirt?
I see a PARAVIRT_SPINLOCKS config option, but IIUC you are talking about a different mechanism here.
Can you please elaborate, or point me to a page that explains it?
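
For what it's worth, from a quick look at arch/x86 the current paravirt
selection seems to boil down to roughly the following (paraphrased from
memory, so please correct me if I misread it):

/* arch/x86/kernel/paravirt.c: default slowpath in the pv_ops template */
.lock.queued_spin_lock_slowpath = native_queued_spin_lock_slowpath,

/* arch/x86/kernel/kvm.c: overridden when running as a KVM guest */
void __init kvm_spinlock_init(void)
{
	/* ... feature checks elided ... */
	__pv_init_lock_hash();
	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
}

with apply_paravirt() later patching the call sites to whatever pv_ops
holds at that point. Is that the mechanism you are referring to, or is
there a separate boot-time selection I am missing?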

> 
> Agreed; and until we have static_call, I think we can abuse the paravirt
> stuff for this.
> 
> By the time we patch the paravirt stuff:
> 
>  check_bugs()
>    alternative_instructions()
>      apply_paravirt()
> 
> we should already have enumerated the NODE topology and so nr_node_ids()
> should be set.
> 
> So if we frob pv_ops.lock.queued_spin_lock_slowpath to
> numa_queued_spin_lock_slowpath before that, it should all get patched
> just right.
> 
> That of course means the whole NUMA_AWARE_SPINLOCKS thing depends on
> PARAVIRT_SPINLOCK, which is a bit awkward…
Just to note, the patch so far does not address paravirt; our goal is to add that support once we have addressed all the concerns for the native version.
So we would end up with four variants of queued_spinlock_slowpath(), one for each combination of native/paravirt and NUMA/non-NUMA.
Or perhaps we do not need a NUMA/paravirt variant?
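
If I am reading your suggestion correctly, the first (native-only) step
could be as small as the sketch below; the hook name and
numa_queued_spin_lock_slowpath() are placeholders here, not something
already in the patch:

/*
 * Sketch only: switch pv_ops to the NUMA-aware slowpath before
 * apply_paravirt() runs, so the call sites get patched to it.
 * Must be called after the NUMA topology has been enumerated
 * (nr_node_ids is valid) and before check_bugs() ->
 * alternative_instructions().
 */
void __init cna_configure_spin_lock_slowpath(void)
{
	if (nr_node_ids > 1)
		pv_ops.lock.queued_spin_lock_slowpath =
					numa_queued_spin_lock_slowpath;
}

Does that match what you had in mind?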

— Alex

