Message-ID: <YHWIezK9pbmbWxsu@hirez.programming.kicks-ass.net>
Date:   Tue, 13 Apr 2021 14:03:07 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Alex Kogan <alex.kogan@...cle.com>
Cc:     linux@...linux.org.uk, mingo@...hat.com, will.deacon@....com,
        arnd@...db.de, longman@...hat.com, linux-arch@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        tglx@...utronix.de, bp@...en8.de, hpa@...or.com, x86@...nel.org,
        guohanjun@...wei.com, jglauber@...vell.com,
        steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
        dave.dice@...cle.com
Subject: Re: [PATCH v14 4/6] locking/qspinlock: Introduce starvation
 avoidance into CNA

On Thu, Apr 01, 2021 at 11:31:54AM -0400, Alex Kogan wrote:

> @@ -49,13 +55,33 @@ struct cna_node {
>  	u16			real_numa_node;
>  	u32			encoded_tail;	/* self */
>  	u32			partial_order;	/* enum val */
> +	s32			start_time;
>  };

> +/*
> + * Controls the threshold time in ms (default = 10) for intra-node lock
> + * hand-offs before the NUMA-aware variant of spinlock is forced to be
> + * passed to a thread on another NUMA node. The default setting can be
> + * changed with the "numa_spinlock_threshold" boot option.
> + */
> +#define MSECS_TO_JIFFIES(m)	\
> +	(((m) + (MSEC_PER_SEC / HZ) - 1) / (MSEC_PER_SEC / HZ))
> +static int intra_node_handoff_threshold __ro_after_init = MSECS_TO_JIFFIES(10);
> +
> +static inline bool intra_node_threshold_reached(struct cna_node *cn)
> +{
> +	s32 current_time = (s32)jiffies;
> +	s32 threshold = cn->start_time + intra_node_handoff_threshold;
> +
> +	return current_time - threshold > 0;
> +}

None of this makes any sense:

 - why do you track time elapsed as a signed entity?
 - why are you using jiffies; that's terrible granularity.

As Andi already said, 10ms is silly large. You've just inflated the
lock-acquire time for every contended lock to stupid land just because
NUMA.
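
For illustration only (this is not from the patch, nor a concrete proposal):
a minimal sketch of what an unsigned, nanosecond-granularity check could look
like if it used local_clock() instead of jiffies. The field type, names and
the threshold value below are hypothetical placeholders.

#include <linux/sched/clock.h>	/* local_clock() */
#include <linux/time64.h>	/* NSEC_PER_MSEC */

/* example policy value in ns; much smaller than 10ms */
static u64 intra_node_handoff_threshold_ns __ro_after_init = NSEC_PER_MSEC;

static inline bool intra_node_threshold_reached(struct cna_node *cn)
{
	/*
	 * start_time would be a u64 ns timestamp taken from local_clock()
	 * when the node was queued; plain unsigned subtraction handles wrap.
	 */
	return local_clock() - cn->start_time > intra_node_handoff_threshold_ns;
}

Keeping everything in unsigned ns sidesteps both the signedness question and
the HZ-dependent granularity of jiffies.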

And this also brings me to the whole premise of this series: *why* are
we optimizing this? What locks are so contended that this actually helps,
and shouldn't you be spending your time breaking those locks? That would
improve throughput more than this ever can.

>  static void __init cna_init_nodes_per_cpu(unsigned int cpu)
>  {
>  	struct mcs_spinlock *base = per_cpu_ptr(&qnodes[0].mcs, cpu);

> @@ -250,11 +284,17 @@ static void cna_order_queue(struct mcs_spinlock *node)
>  static __always_inline u32 cna_wait_head_or_lock(struct qspinlock *lock,
>  						 struct mcs_spinlock *node)
>  {
> -	/*
> -	 * Try and put the time otherwise spent spin waiting on
> -	 * _Q_LOCKED_PENDING_MASK to use by sorting our lists.
> -	 */
> -	cna_order_queue(node);
> +	struct cna_node *cn = (struct cna_node *)node;
> +
> +	if (!cn->start_time || !intra_node_threshold_reached(cn)) {
> +		/*
> +		 * Try and put the time otherwise spent spin waiting on
> +		 * _Q_LOCKED_PENDING_MASK to use by sorting our lists.
> +		 */
> +		cna_order_queue(node);
> +	} else {
> +		cn->partial_order = FLUSH_SECONDARY_QUEUE;

This is even worse naming.

> +	}
>  
>  	return 0; /* we lied; we didn't wait, go do so now */
>  }
