Message-ID: <1370972187.9844.205.camel@gandalf.local.home>
Date:	Tue, 11 Jun 2013 13:36:27 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	paulmck@...ux.vnet.ibm.com
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...icios.com, josh@...htriplett.org,
	niv@...ibm.com, tglx@...utronix.de, peterz@...radead.org,
	Valdis.Kletnieks@...edu, dhowells@...hat.com, edumazet@...gle.com,
	darren@...art.com, fweisbec@...il.com, sbw@....edu,
	torvalds@...ux-foundation.org, walken@...gle.com,
	waiman.long@...com
Subject: Re: [PATCH RFC ticketlock] v2 Auto-queued ticketlock

On Tue, 2013-06-11 at 10:02 -0700, Paul E. McKenney wrote:

> +#ifdef CONFIG_TICKET_LOCK_QUEUED
> +
> +#define __TKT_SPIN_INC 2
> +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc);
> +
> +#else /* #ifdef CONFIG_TICKET_LOCK_QUEUED */
> +
> +#define __TKT_SPIN_INC 1
> +static inline bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc)
> +{
> +	return false;
> +}
> +
> +#endif /* #else #ifdef CONFIG_TICKET_LOCK_QUEUED */
> +
>  /*
>   * Ticket locks are conceptually two parts, one indicating the current head of
>   * the queue, and the other indicating the current tail. The lock is acquired
> @@ -49,17 +64,15 @@
>   */
>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>  {
> -	register struct __raw_tickets inc = { .tail = 1 };
> +	register struct __raw_tickets inc = { .tail = __TKT_SPIN_INC };
>  
>  	inc = xadd(&lock->tickets, inc);
> -
>  	for (;;) {
> -		if (inc.head == inc.tail)
> +		if (inc.head == inc.tail || tkt_spin_pass(lock, inc))
>  			break;
> -		cpu_relax();

Overheating the CPU are we ;-)

Keeping the cpu_relax() doesn't hurt, even when TICKET_LOCK_QUEUED is
enabled, as the only latency to worry about is when tkt_spin_pass()
returns true, and in that case it breaks out of the loop anyway.
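
I.e., just keep the loop like this (a sketch only, using the names from
your patch):

	for (;;) {
		if (inc.head == inc.tail || tkt_spin_pass(lock, inc))
			break;
		cpu_relax();
		inc.head = ACCESS_ONCE(lock->tickets.head);
	}

If tkt_spin_pass() returns true we break before ever reaching the
cpu_relax(), so the handoff path doesn't pay for it.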

But if you really don't want the double call to cpu_relax(), we can
probably remove the cpu_relax() from tkt_spin_pass() and keep this one.
Or, in the tkt_spin_pass() stub above for when TICKET_LOCK_QUEUED is not
set, we could do:

static inline bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc)
{
	cpu_relax();
	return false;
}

Honestly, I would say remove it from tkt_spin_pass() when returning
false.
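
Roughly (a sketch only; I'm hand-waving the actual queueing logic in
your tkt_spin_pass() and just showing the false-return path):

bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc)
{
	/* ... queue handoff logic elided ... */

	/*
	 * Not our turn and not queueing: skip the cpu_relax() here and
	 * let the caller's spin loop do it instead.
	 */
	return false;
}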

-- Steve


>  		inc.head = ACCESS_ONCE(lock->tickets.head);
>  	}
> -	barrier();		/* make sure nothing creeps before the lock is taken */
> +	barrier(); /* Make sure nothing creeps in before the lock is taken. */
>  }
>  


