Message-ID: <51B75FF3.6040808@hp.com>
Date:	Tue, 11 Jun 2013 13:35:47 -0400
From:	Waiman Long <waiman.long@...com>
To:	Steven Rostedt <rostedt@...dmis.org>
CC:	paulmck@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
	peterz@...radead.org, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	edumazet@...gle.com, darren@...art.com, fweisbec@...il.com,
	sbw@....edu, torvalds@...ux-foundation.org
Subject: Re: [PATCH RFC ticketlock] Auto-queued ticketlock

On 06/11/2013 12:20 PM, Steven Rostedt wrote:
>>> diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
>>> index ad0ad07..cdaefdd 100644
>>> --- a/arch/x86/include/asm/spinlock_types.h
>>> +++ b/arch/x86/include/asm/spinlock_types.h
>>> @@ -7,12 +7,18 @@
>>>
>>>    #include <linux/types.h>
>>>
>>> -#if (CONFIG_NR_CPUS < 256)
>>> +#if (CONFIG_NR_CPUS < 128)
>>>    typedef u8  __ticket_t;
>>>    typedef u16 __ticketpair_t;
>>> -#else
>>> +#define TICKET_T_CMP_GE(a, b) (UCHAR_MAX / 2 >= (unsigned char)((a) - (b)))
>>> +#elif (CONFIG_NR_CPUS < 32768)
>>>    typedef u16 __ticket_t;
>>>    typedef u32 __ticketpair_t;
>>> +#define TICKET_T_CMP_GE(a, b) (USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
>>> +#else
>>> +typedef u32 __ticket_t;
>>> +typedef u64 __ticketpair_t;
>>> +#define TICKET_T_CMP_GE(a, b) (UINT_MAX / 2 >= (unsigned int)((a) - (b)))
>>>    #endif
>> It is theoretically possible that a large number of CPUs (say, 64 or
>> more with CONFIG_NR_CPUS < 128) can acquire a ticket from the lock
>> before the check for TICKET_T_CMP_GE() in tkt_spin_pass(). So the check
>> will fail even when a large number of CPUs are contending for the
>> lock. The chance of this happening is, of course, extremely rare. This
>> is not an error, as the lock still works as it should without your
>> change.
> Can you explain this more? How can you acquire the ticket and update at
> the same time? If a queue has been set, then you can't acquire the
> ticket as the head has a 1 appended to it.

I am sorry if I confused you. What I meant is queuing up at the tail of
the ticket lock by incrementing the tail number, not actually getting the
lock.
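
To make the wrap-around comparison concrete, here is a minimal userspace
sketch (not from the patch, with made-up head/tail values, for
illustration only) of how the u8 variant of TICKET_T_CMP_GE() evaluates:

#include <limits.h>
#include <stdio.h>

/* u8 variant of the macro from the patch (CONFIG_NR_CPUS < 128) */
#define TICKET_T_CMP_GE(a, b) (UCHAR_MAX / 2 >= (unsigned char)((a) - (b)))

int main(void)
{
	/* Simple case: head 10, tail 5 -> head is "ahead" of tail */
	printf("%d\n", TICKET_T_CMP_GE(10, 5));   /* 1: (u8)(10 - 5) == 5 */

	/* Wrap-around: head 2 has wrapped past 255, tail 250 -> still "ahead" */
	printf("%d\n", TICKET_T_CMP_GE(2, 250));  /* 1: (u8)(2 - 250) == 8 */

	/* head 5, tail 10 -> head has not yet reached tail */
	printf("%d\n", TICKET_T_CMP_GE(5, 10));   /* 0: (u8)(5 - 10) == 251 */

	return 0;
}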

>>
>>> +/*
>>> + * Return a pointer to the queue header associated with the specified lock,
>>> + * or return NULL if there is no queue for the lock or if the lock's queue
>>> + * is in transition.
>>> + */
>>> +static struct tkt_q_head *tkt_q_find_head(arch_spinlock_t *asp)
>>> +{
>>> +	int i;
>>> +	int start;
>>> +
>>> +	start = i = tkt_q_hash(asp);
>>> +	do
>>> +		if (tkt_q_heads[i].ref == asp)
>>> +			return &tkt_q_heads[i];
>>> +	while ((i = tkt_q_next_slot(i)) != start);
>>> +	return NULL;
>>> +}
>> With a table size of 256, you may have to scan the whole table to find
>> the right queue head. This can be a significant overhead. I would
>> suggest setting a limit on how many entries it scans before it aborts,
>> rather than checking the whole table.
> We could add a limit, but in practice I'm not sure that would be an
> issue. I thought the same thing when I first saw this, but hitting most
> of the list would require a large collision in the hash algorithm,
> which could probably be fixed with a better hash.

The current code will scan the whole table until either it gets a match
or the whole table has been scanned. I first thought that hitting a NULL
entry could stop the search, but that is not true. It is entirely
possible that an entry was in use when a queue was created but became
empty immediately after that. So we have to scan the whole table to be
sure, unless we impose a limit on how many entries we scan.
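
To make the suggestion concrete, below is a rough sketch of a bounded
scan. TKT_Q_SCAN_LIMIT and tkt_q_find_head_bounded are made-up names,
the limit value is arbitrary, and the code is untested; it only
illustrates aborting the scan early and treating the lock as unqueued:

/* Illustrative sketch only: give up after at most TKT_Q_SCAN_LIMIT
 * slots instead of scanning the whole table.
 */
#define TKT_Q_SCAN_LIMIT 16	/* made-up value, to be tuned */

static struct tkt_q_head *tkt_q_find_head_bounded(arch_spinlock_t *asp)
{
	int i = tkt_q_hash(asp);
	int n;

	for (n = 0; n < TKT_Q_SCAN_LIMIT; n++) {
		if (tkt_q_heads[i].ref == asp)
			return &tkt_q_heads[i];
		i = tkt_q_next_slot(i);
	}
	return NULL;	/* looks like "no queue" to the caller */
}

Of course, a false NULL here just looks like "no queue" to the caller,
so whether every caller can tolerate that would still need to be
audited.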

Regards,
Longman
