Message-Id: <02D4688A-FB4C-4100-8B85-C915F130BB99@oracle.com>
Date:   Thu, 15 Apr 2021 22:52:57 -0400
From:   Alex Kogan <alex.kogan@...cle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux@...linux.org.uk, Ingo Molnar <mingo@...hat.com>,
        Will Deacon <will.deacon@....com>,
        Arnd Bergmann <arnd@...db.de>, longman@...hat.com,
        linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, tglx@...utronix.de, bp@...en8.de,
        hpa@...or.com, x86@...nel.org, guohanjun@...wei.com,
        jglauber@...vell.com, steven.sistare@...cle.com,
        daniel.m.jordan@...cle.com, dave.dice@...cle.com
Subject: Re: [External] : Re: [PATCH v14 4/6] locking/qspinlock: Introduce
 starvation avoidance into CNA



> On Apr 13, 2021, at 8:03 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> 
> On Thu, Apr 01, 2021 at 11:31:54AM -0400, Alex Kogan wrote:
> 
>> @@ -49,13 +55,33 @@ struct cna_node {
>> 	u16			real_numa_node;
>> 	u32			encoded_tail;	/* self */
>> 	u32			partial_order;	/* enum val */
>> +	s32			start_time;
>> };
> 
>> +/*
>> + * Controls the threshold time in ms (default = 10) for intra-node lock
>> + * hand-offs before the NUMA-aware variant of spinlock is forced to be
>> + * passed to a thread on another NUMA node. The default setting can be
>> + * changed with the "numa_spinlock_threshold" boot option.
>> + */
>> +#define MSECS_TO_JIFFIES(m)	\
>> +	(((m) + (MSEC_PER_SEC / HZ) - 1) / (MSEC_PER_SEC / HZ))
>> +static int intra_node_handoff_threshold __ro_after_init = MSECS_TO_JIFFIES(10);
>> +
>> +static inline bool intra_node_threshold_reached(struct cna_node *cn)
>> +{
>> +	s32 current_time = (s32)jiffies;
>> +	s32 threshold = cn->start_time + intra_node_handoff_threshold;
>> +
>> +	return current_time - threshold > 0;
>> +}
> 
> None of this makes any sense:
> 
> - why do you track time elapsed as a signed entity?
> - why are you using jiffies; that's terrible granularity.
Good points, I will address them (see below). I will just note that
those suggestions came from senior folks on this mailing list,
so it seemed prudent to take their counsel.

> 
> As Andi already said, 10ms is silly large. You've just inflated the
> lock-acquire time for every contended lock to stupid land just because
> NUMA.
I just ran a few quick tests — local_clock() (a wrapper around sched_clock()) 
works well, so I will switch to using that.
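
Roughly, I have in mind something along these lines (an untested sketch,
not the final patch; the names with the _ns suffix are tentative):

/*
 * Sketch only: record the queue-entry time in nanoseconds taken from
 * local_clock(), so the check is plain unsigned arithmetic with no
 * jiffies conversion. cn->start_time would become a u64.
 */
static u64 intra_node_handoff_threshold_ns __ro_after_init =
		10 * NSEC_PER_MSEC;	/* current 10ms default, see below */

static inline bool intra_node_threshold_reached(struct cna_node *cn)
{
	return local_clock() - cn->start_time > intra_node_handoff_threshold_ns;
}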

I also collected numbers with different thresholds. It looks like we can
drop the threshold to 1ms with only a minor performance penalty; pushing
it to 100us, however, has a more significant cost. Here are the numbers
for reference:

will-it-scale/lock2_threads:
threshold:                     10ms     1ms      100us
speedup at 142 threads:       2.184    1.974     1.1418 

will-it-scale/open1_threads:
threshold:                     10ms     1ms      100us
speedup at 142 threads:       2.146    1.974     1.291

Would you be more comfortable with setting the default at 1ms?
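
If 1ms works for you, the "numa_spinlock_threshold" boot option can keep
taking milliseconds and just convert to nanoseconds internally, e.g.
(again only a sketch, with a made-up handler name):

static int __init numa_spinlock_threshold_setup(char *str)
{
	int msecs;

	/* reject malformed or non-positive values, leave the default alone */
	if (kstrtoint(str, 10, &msecs) || msecs <= 0)
		return 0;

	intra_node_handoff_threshold_ns = (u64)msecs * NSEC_PER_MSEC;
	return 1;
}
__setup("numa_spinlock_threshold=", numa_spinlock_threshold_setup);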

> And this also brings me to the whole premise of this series; *why* are
> we optimizing this? What locks are so contended that this actually helps
> and shouldn't you be spending your time breaking those locks? That would
> improve throughput more than this ever can.

I think for the same reason the kernel switched from ticket locks to queue
locks several years back. There will always be applications with contended
locks. Sometimes the workarounds are easy, but often they are not, for
instance with legacy applications or when the workload is inherently skewed
(e.g., every client updates the metadata of the same file, protected by the
same lock). The results show that in those cases we leave > 2x performance
on the table. And those are not only our numbers; LKP reports show similar
or even better results on a wide range of benchmarks, e.g.:
https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/HGVOCYDEE5KTLYPTAFBD2RXDQOCDPFUJ/
https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/OUPS7MZ3GJA2XYWM52GMU7H7EI25IT37/
https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/DNMEQPXJRQY2IKHZ3ERGRY6TUPWDTFUN/

Regards,
— Alex
