Date:	Mon, 20 Oct 2008 19:57:23 -0500
From:	"David M. Lloyd" <dmlloyd@...rg.com>
To:	david@...g.hm
Cc:	Arnaldo Carvalho de Melo <acme@...hat.com>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: sched_yield() options

On 10/20/2008 07:44 PM, david@...g.hm wrote:
> On Mon, 20 Oct 2008, David M. Lloyd wrote:
> 
>> On 10/20/2008 06:08 PM, david@...g.hm wrote:
>>> in the case I'm looking at there are two (or more) threads running 
>>> with one message queue in the center.
>>>
>>> 'input threads' are grabbing the lock to add messages to the queue
>>>
>>> 'output threads' are grabbing the lock to remove messages from the queue
>>>
>>> the programmer is doing a pthread_yield() after each message is 
>>> processed in an attempt to help fairness (he initially added it in 
>>> when he started seeing starvation on single-core systems)
>>>
>>> what should he be doing instead?
>>
>> If you're seeing starvation, to me that's a good indicator that the 
>> granularity of queue items is too small... probably there'd be an 
>> overall benefit of grabbing more things at once from the queue.
>> <...>
> I've suggested that, but the changes necessary to support that mode of 
> operation are very invasive, and so not an option in the near/medium term.
> 
> in the meantime, is there something better than sched_yield() that should 
> be happening?
> <...>
> the sched_yield is an attempt to have the secretary pause once in a 
> while and check to see if the other line has someone waiting.
> 
> from looking at the software running, it doesn't seem to work very well. 
> I've also suggested investigating lockless algorithms for the queue, but 
> that is also a lot of high-risk (but high-reward) work. what else can be 
> done to make a mutex more fair?

No, you're not going to make much progress trying to fix the wrong problem, 
in my opinion.  A lockless algorithm *might* work, but since your computation 
units are apparently so small, I suspect you'll still spend a lot of time 
doing compare-and-swap and barriers and that sort of thing anyway, and it 
will still be the same sort of situation.  I think your design is basically 
broken.  Frankly, you're probably better off just ditching the queue and 
doing the work directly in the queuing threads.  At least then you won't 
have contention.
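
To make the compare-and-swap point concrete: even the simplest lock-free 
structure, for example a Treiber-style LIFO push written with C11 atomics 
(shown here purely as an illustration, not as a proposal for this queue), 
still performs a read-modify-write with a barrier for every single message 
and retries it under contention:

#include <stdatomic.h>
#include <stddef.h>

struct msg {
	struct msg *next;
	/* payload ... */
};

/* Shared head of a Treiber-style stack; illustrative only. */
static _Atomic(struct msg *) stack_top = NULL;

void lockfree_push(struct msg *m)
{
	/* Each push is a CAS loop: one atomic read-modify-write per
	 * message, retried whenever another thread wins the race. */
	struct msg *old = atomic_load_explicit(&stack_top,
					       memory_order_relaxed);
	do {
		m->next = old;
	} while (!atomic_compare_exchange_weak_explicit(
			&stack_top, &old, m,
			memory_order_release, memory_order_relaxed));
}

So when each message represents only a tiny amount of real work, the 
cache-line ping-pong on that one shared pointer dominates much as the mutex 
does today, which is why batching the work or moving it into the producing 
threads attacks the real problem.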

- DML
