Message-Id: <1B2B46E9-651E-4BA5-988A-924AE3E72C00@oracle.com>
Date:   Thu, 23 Jan 2020 18:39:36 -0500
From:   Alex Kogan <alex.kogan@...cle.com>
To:     Waiman Long <longman@...hat.com>
Cc:     linux@...linux.org.uk, peterz@...radead.org, mingo@...hat.com,
        will.deacon@....com, arnd@...db.de, linux-arch@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        tglx@...utronix.de, bp@...en8.de, hpa@...or.com, x86@...nel.org,
        guohanjun@...wei.com, jglauber@...vell.com,
        steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
        dave.dice@...cle.com
Subject: Re: [PATCH v9 4/5] locking/qspinlock: Introduce starvation avoidance
 into CNA

> On Jan 23, 2020, at 3:39 PM, Waiman Long <longman@...hat.com> wrote:
> 
> On 1/23/20 2:55 PM, Waiman Long wrote:
>> Playing with lock event counts, I would like you to change the meaning
>> of the intra_count parameter that you are tracking. Instead of tracking the
>> number of times a lock is passed to a waiter of the same node
>> consecutively, I would like you to track the number of times the head
>> waiter in the secondary queue has given up its chance to acquire the
>> lock because a later waiter has jumped the queue and acquired the lock
>> before it.
Isn’t that the same thing? Note that we keep track of the number of 
intra-node lock transfers only when the secondary queue is not empty.
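
To spell out why the two counts coincide, here is a toy snippet (plain C
with made-up names; count_handoff() and INTRA_NODE_HANDOFF_THRESHOLD are
illustrative stand-ins, not the patch code):

#include <stdbool.h>

/* Arbitrary threshold for this sketch. */
#define INTRA_NODE_HANDOFF_THRESHOLD	16

/*
 * Count one lock handoff; returns true when the secondary queue should
 * be flushed to avoid starving its head.
 */
static bool count_handoff(unsigned int *intra_count, bool sec_queue_nonempty,
                          int prev_node, int next_node)
{
        /*
         * An intra-node handoff bypasses the head of the secondary queue
         * iff that queue is non-empty, so "consecutive intra-node
         * transfers while the secondary queue is non-empty" and "times
         * the secondary head was jumped over" are the same count.
         */
        if (sec_queue_nonempty && prev_node == next_node)
                (*intra_count)++;

        return *intra_count >= INTRA_NODE_HANDOFF_THRESHOLD;
}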

>> This value determines the worst-case latency that a secondary
>> queue waiter can experience. So
> 
> Well, that is not strictly true, as a waiter in the middle of the
> secondary queue can go back and forth between the queues a number of
> times. Of course, the bound would hold if we can ensure that, when a
> FLUSH_SECONDARY_QUEUE is issued, those waiters that were in the secondary
> queue won't be put back into it again.
This will not work as intended when we have more than 2 nodes. That is, if we
have threads from nodes A and B in the secondary queue, and then the queue
is flushed and its head (say, from node A) gets the lock, we want to push
the threads from node B back into the secondary queue, to keep the lock on node A.

And if we have only 2 nodes, a waiter in the middle of the secondary queue will 
never go back into the secondary queue, even if the threshold is small. 
This is because we flush the secondary queue by putting all its waiters at
the front of the main queue, and the secondary queue will remain empty at least
until we reach a thread from another node.
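
To make both points concrete, here is a sequential toy sketch of the flush
and handoff paths (again with made-up names: struct cna_lock,
push_secondary(), flush_secondary() and pick_next() only model the ordering
logic; the actual patch does all of this lock-free on the MCS nodes):

#include <stddef.h>

/* One queued waiter; a simplified stand-in for the CNA MCS node. */
struct waiter {
        struct waiter *next;
        int numa_node;
};

struct cna_lock {
        struct waiter *main_head;               /* main queue */
        struct waiter *sec_head, *sec_tail;     /* secondary queue */
        unsigned int intra_count;
};

/* Park an other-node waiter at the tail of the secondary queue. */
static void push_secondary(struct cna_lock *lk, struct waiter *w)
{
        w->next = NULL;
        if (lk->sec_tail)
                lk->sec_tail->next = w;
        else
                lk->sec_head = w;
        lk->sec_tail = w;
}

/*
 * Flush: splice the whole secondary queue to the front of the main
 * queue. The secondary queue is empty afterwards.
 */
static void flush_secondary(struct cna_lock *lk)
{
        if (!lk->sec_head)
                return;
        lk->sec_tail->next = lk->main_head;
        lk->main_head = lk->sec_head;
        lk->sec_head = lk->sec_tail = NULL;
        lk->intra_count = 0;
}

/* Choose the next lock holder after a release on `numa_node`. */
static struct waiter *pick_next(struct cna_lock *lk, int numa_node)
{
        struct waiter *w;

        /* Threshold reached (see the counting sketch above): flush and
         * hand the lock straight to the former secondary-queue head. */
        if (lk->sec_head && lk->intra_count >= INTRA_NODE_HANDOFF_THRESHOLD) {
                flush_secondary(lk);
                w = lk->main_head;
                lk->main_head = w->next;
                return w;
        }

        while ((w = lk->main_head) != NULL) {
                lk->main_head = w->next;
                if (w->numa_node == numa_node) {
                        if (lk->sec_head)       /* bypassing the sec head */
                                lk->intra_count++;
                        return w;               /* intra-node handoff */
                }
                push_secondary(lk, w);          /* different node: park it */
        }

        /* No same-node waiter left: the secondary-queue head goes next. */
        flush_secondary(lk);
        if ((w = lk->main_head) != NULL)
                lk->main_head = w->next;
        return w;
}

With 2 nodes, every waiter flushed ahead of the former secondary head shares
its node, so the loop keeps returning them back-to-back and push_secondary()
is not reached again until a waiter from the other node surfaces.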

Regards,
— Alex
