Date:	Tue, 11 Jun 2013 13:32:56 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Waiman Long <waiman.long@...com>, linux-kernel@...r.kernel.org,
	mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
	peterz@...radead.org, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	edumazet@...gle.com, darren@...art.com, fweisbec@...il.com,
	sbw@....edu, torvalds@...ux-foundation.org,
	Davidlohr Bueso <davidlohr.bueso@...com>
Subject: Re: [PATCH RFC ticketlock] Auto-queued ticketlock

On Tue, Jun 11, 2013 at 04:09:56PM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 12:49 -0700, Paul E. McKenney wrote:
> 
> > +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc)
> > +{
> > +	if (unlikely(inc.head & 0x1)) {
> > +
> > +		/* This lock has a queue, so go spin on the queue. */
> > +		if (tkt_q_do_spin(ap, inc))
> > +			return true;
> > +
> > +		/* Get here if the queue is in transition: Retry next time. */
> > +
> 
> This looks better, but please add a comment, something to the likes of:
> 
> 	/*
> 	 * Only the TKT_Q_SWITCH waiter will set up the queue, to prevent
> 	 * a thundering herd of setups from occurring. It is still possible
> 	 * for more than one task to perform a setup if the lock is released
> 	 * after this check: a waiter coming in may also match this test. But
> 	 * that's covered by the cmpxchg() setup in tkt_q_start_contend.
> 	 */
> 
> > +	} else if (inc.tail - TKT_Q_SWITCH == inc.head) {
> 
> Also shouldn't this be:
> 
> 	} else if ((__ticket_t)(inc.tail - TKT_Q_SWITCH) == inc.head) {

Good points on the comment; here is what I currently have:

	} else if (inc.tail - TKT_Q_SWITCH == inc.head) {

		/*
		 * This lock has lots of spinners, but no queue.  Go create
		 * a queue to spin on.
		 *
		 * In the common case, only the single task that
		 * sees the head and tail tickets differing by
		 * exactly TKT_Q_SWITCH will come here to set up the queue,
		 * which prevents a "thundering herd" of queue setups.
		 * Although it is still possible for an unfortunate series
		 * of lock handoffs and newly arrived tasks to result
		 * in more than one task performing a queue setup, this
		 * is unlikely.  Of course, this situation must still be
		 * handled correctly, which is the job of the cmpxchg()
		 * in tkt_q_start_contend().
		 */
		if (tkt_q_start_contend(ap, inc))
			return true;

Does that help?
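
For anyone reading along without the patch handy, the cmpxchg() trick
that the comment refers to is, in spirit, the following.  This is a
userspace sketch using C11 atomics, not the patch's code, and the
struct, array, and function names are made up for illustration:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	/* Hypothetical stand-in for the per-queue bookkeeping. */
	struct tkt_q_head {
		_Atomic(void *) ref;	/* lock this queue serves, or NULL if free */
	};

	static struct tkt_q_head tkt_q_heads[4];	/* think TKT_Q_NQUEUES */

	/*
	 * Several tasks may race to associate a queue with the same lock,
	 * but only the one whose compare-and-swap moves ->ref from NULL
	 * to the lock's address wins; the losers see a non-NULL ->ref and
	 * simply keep spinning on the ticket as before.
	 */
	static bool tkt_q_try_claim(struct tkt_q_head *qh, void *lock)
	{
		void *expected = NULL;

		return atomic_compare_exchange_strong(&qh->ref, &expected, lock);
	}

	int main(void)
	{
		int lock;	/* stands in for an arch_spinlock_t */

		/* The first claim succeeds; a second racing claim would not. */
		return tkt_q_try_claim(&tkt_q_heads[0], &lock) ? 0 : 1;
	}

The real tkt_q_start_contend() of course does more than this, but the
"at most one winner" guarantee comes from a cmpxchg() of this general
kind.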

> As TKT_Q_SWITCH doesn't have a type, I'm not sure how C will evaluate
> this. I always screw type conversions up, and just add in the type casts
> to be safe.
> 
> You could also give TKT_Q_SWITCH a type too.

This is an excellent point as well -- things might well get confused.
My solution was to take your last suggestion and give TKT_Q_SWITCH the
same type as inc.tail and inc.head, and also apply type-safety paranoia
to TKT_Q_NQUEUES:

/*
 * TKT_Q_SWITCH is twice the number of CPUs that must be spinning on a
 * given ticket lock to motivate switching to spinning on a queue.
 * It is twice the number because the bottom bit of the ticket is
 * reserved to indicate that a queue is associated with the lock.
 */
#define TKT_Q_SWITCH  ((__ticket_t)(CONFIG_TICKET_LOCK_QUEUED_SWITCH * 2))

/*
 * TKT_Q_NQUEUES is the number of queues to maintain.  Large systems
 * might have multiple highly contended locks, so provide more queues for
 * systems with larger numbers of CPUs.
 */
#define TKT_Q_NQUEUES (2 * DIV_ROUND_UP(NR_CPUS + ((int)TKT_Q_SWITCH) - 1, \
					(int)TKT_Q_SWITCH))
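
To make the arithmetic concrete (the numbers below are purely an
example, not taken from any particular config): with
CONFIG_TICKET_LOCK_QUEUED_SWITCH=8, TKT_Q_SWITCH is ((__ticket_t)16),
and an NR_CPUS=64 build gets TKT_Q_NQUEUES =
2 * DIV_ROUND_UP(64 + 16 - 1, 16) = 2 * DIV_ROUND_UP(79, 16) = 2 * 5 = 10
queues.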

Does that look OK?  (The limits on the value of TKT_Q_SWITCH should avoid
signed integer overflow.)
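
And to make the promotion issue concrete, here is a throwaway userspace
sketch of what your explicit cast on the difference does once the ticket
counter wraps.  The 16-bit __ticket_t and the value 16 for TKT_Q_SWITCH
are assumptions for the example only:

	#include <stdio.h>
	#include <stdint.h>

	typedef uint16_t __ticket_t;	/* assumed 16-bit tickets */

	#define TKT_Q_SWITCH	((__ticket_t)16)	/* example value only */

	int main(void)
	{
		__ticket_t tail = 2;				/* tail recently wrapped past zero */
		__ticket_t head = (__ticket_t)(tail - 16);	/* 65522: 16 behind tail, mod 2^16 */

		/* Usual arithmetic conversions promote both operands to int,
		 * so the difference is -14, which can never equal 65522. */
		printf("no cast:   %d\n", tail - TKT_Q_SWITCH == head);

		/* Casting the difference back to __ticket_t restores the
		 * intended modulo-2^16 comparison. */
		printf("with cast: %d\n", (__ticket_t)(tail - TKT_Q_SWITCH) == head);

		return 0;
	}

The first printf() prints 0 and the second prints 1: the promotions to
int mean it is the cast applied to the full difference that keeps the
comparison in modulo-2^16 arithmetic across a wrap.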

							Thanx, Paul

> -- Steve
> 
> > +
> > +		/*
> > +		 * This lock has lots of spinners, but no queue.
> > +		 * Go create a queue to spin on.
> > +		 */
> > +		if (tkt_q_start_contend(ap, inc))
> > +			return true;
> > +
> > +		/* Get here if the queue is in transition: Retry next time. */
> > +	}
> > +
> > +	/* Either no need for a queue or the queue is in transition.  Spin. */
> > +	return false;
> > +}
> > +EXPORT_SYMBOL(tkt_spin_pass);
> 
> 

