Message-ID: <4e15fa1d-9540-3274-502a-4195a0d46f63@redhat.com>
Date:   Wed, 22 Jan 2020 12:24:58 -0500
From:   Waiman Long <longman@...hat.com>
To:     Lihao Liang <lihaoliang@...gle.com>,
        Alex Kogan <alex.kogan@...cle.com>
Cc:     linux@...linux.org.uk, peterz@...radead.org, mingo@...hat.com,
        will.deacon@....com, arnd@...db.de, linux-arch@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        tglx@...utronix.de, bp@...en8.de, hpa@...or.com, x86@...nel.org,
        guohanjun@...wei.com, jglauber@...vell.com, dave.dice@...cle.com,
        steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
        Will Deacon <will@...nel.org>
Subject: Re: [PATCH v9 0/5] Add NUMA-awareness to qspinlock

On 1/22/20 6:45 AM, Lihao Liang wrote:
> Hi Alex,
>
> On Wed, Jan 22, 2020 at 10:28 AM Alex Kogan <alex.kogan@...cle.com> wrote:
>> Summary
>> -------
>>
>> Lock throughput can be increased by handing a lock to a waiter on the
>> same NUMA node as the lock holder, provided care is taken to avoid
>> starvation of waiters on other NUMA nodes. This patch introduces CNA
>> (compact NUMA-aware lock) as the slow path for qspinlock. It is
>> enabled through a configuration option (NUMA_AWARE_SPINLOCKS).
>>
> Thanks for your patches. The experimental results look promising!
>
> I understand that the new CNA qspinlock uses randomization to achieve
> long-term fairness, and provides the numa_spinlock_threshold parameter
> for users to tune. As Linux runs extremely diverse workloads, it is not
> clear how the randomization affects fairness across them, or how users
> with different requirements are supposed to tune this parameter.
>
> To this end, Will and I consider it beneficial to be able to answer the
> following question:
>
> With different values of numa_spinlock_threshold and
> SHUFFLE_REDUCTION_PROB_ARG, how long do threads running on different
> sockets have to wait to acquire the lock? This is particularly relevant
> in high-contention situations, where new threads keep arriving on the
> same socket as the lock holder.
>
> In this email, I try to provide some formal analysis to address this
> question. Let's assume the probability for the lock to stay on the
> same socket is *at least* p, which corresponds to the probability that
> the function probably(unsigned int num_bits) in the patch returns
> *false* when SHUFFLE_REDUCTION_PROB_ARG is passed as num_bits.

That is not strictly true from my understanding of the code. The
probably() function does not come into play at all if a secondary queue
is present. Also, calling cna_scan_main_queue() doesn't guarantee that a
waiter on the same node will be found. So a simple mathematical analysis
isn't really applicable here; one would have to run an actual simulation
to find out what the behavior will be.
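
Schematically, my reading of the hand-off path is something like the
sketch below. This is a paraphrase, not the literal patch code: apart
from probably() and cna_scan_main_queue(), every helper name is a
made-up stand-in, and the return convention of cna_scan_main_queue()
is guessed.

static void cna_pass_lock_sketch(struct mcs_spinlock *node)
{
	/*
	 * probably() is consulted only when the secondary queue is
	 * empty; with num_bits = 7 it returns false, enabling the
	 * scan, on only about 1 in 128 calls.
	 */
	if (secondary_queue_empty(node) &&
	    probably(SHUFFLE_REDUCTION_PROB_ARG)) {
		pass_to_next_main_queue_waiter(node);
		return;
	}

	/* The scan itself can fail: a same-node waiter may not exist. */
	if (cna_scan_main_queue(node))
		pass_to_same_node_waiter(node);
	else
		pass_to_next_main_queue_waiter(node);
}

Either way, the probability of the lock staying on the socket depends
on the composition of the queues, not just on the coin flip.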

The comment in the code states:

/*
 * Controls the probability for enabling the scan of the main queue when
 * the secondary queue is empty. The chosen value reduces the amount of
 * unnecessary shuffling of threads between the two waiting queues when
 * the contention is low, while responding fast enough and enabling
 * the shuffling when the contention is high.
 */
#define SHUFFLE_REDUCTION_PROB_ARG  (7)
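
For reference, probably() itself is, as best I recall, just a per-CPU
linear congruential step. Below is a self-contained userspace rendition
(the per-CPU seed becomes a plain static, and next_pseudo_random32() is
the same LCG the kernel uses) that shows the implied odds:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t seed = 1;

/* Same LCG as the kernel's next_pseudo_random32(). */
static uint32_t next_pseudo_random32(uint32_t s)
{
	return s * 1664525u + 1013904223u;
}

/*
 * Returns false only when the low num_bits of the new seed are all
 * zero, i.e. with probability about 1/2^num_bits.
 */
static bool probably(unsigned int num_bits)
{
	seed = next_pseudo_random32(seed);
	return seed & ((1u << num_bits) - 1);
}

int main(void)
{
	unsigned long falses = 0, trials = 1000000;

	for (unsigned long i = 0; i < trials; i++)
		if (!probably(7))
			falses++;

	/* Expect roughly trials/128, i.e. ~7812 */
	printf("false: %lu of %lu (~%.3f%%)\n",
	       falses, trials, 100.0 * falses / trials);
	return 0;
}

So with SHUFFLE_REDUCTION_PROB_ARG = 7, probably() returns false on
roughly 0.8% of calls. Per hand-off the scan is a rare event, but under
heavy contention hand-offs are frequent, so the shuffling still kicks
in quickly, which matches the rationale in the comment above.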

Cheers,
Longman


