Date:	Tue, 26 Apr 2016 10:27:59 -0700
From:	Peter Hurley <peter@...leysoftware.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Roman Pen <roman.penyaev@...fitbricks.com>,
	Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
	linux-kernel@...r.kernel.org, David Howells <dhowells@...hat.com>
Subject: Re: [PATCH 1/1] [RFC] workqueue: fix ghost PENDING flag while doing MQ IO

Hi Tejun,

On 04/26/2016 08:15 AM, Tejun Heo wrote:
> Hello, Peter.
> 
> On Mon, Apr 25, 2016 at 06:22:01PM -0700, Peter Hurley wrote:
>> This is the same bug I wrote about 2 yrs ago (but with the wrong fix).
>>
>> http://lkml.iu.edu/hypermail/linux/kernel/1402.2/04697.html
>>
>> Unfortunately I didn't have a reproducer at all :/
> 
> Ah, bummer.
> 
>> The atomic_long_xchg() patch has several benefits over the naked barrier:
>>
>> 1. set_work_pool_and_clear_pending() has the same requirements as
>>    clear_work_data(); note that both require write barrier before and
>>    full barrier after.
> 
> clear_work_data() is only used by __cancel_work_timer() and there's no
> following execution or anything where rescheduling memory loads can
> cause any issue.

I meant this block:

	clear_work_data(work);

	/*
	 * Paired with prepare_to_wait() above so that either
	 * waitqueue_active() is visible here or !work_is_canceling() is
	 * visible there.
	 */
	smp_mb();
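
For reference, a rough sketch of the xchg-based variant I have in mind
(my assumption, not merged code; names as in kernel/workqueue.c):

        static void clear_work_data(struct work_struct *work)
        {
                /*
                 * xchg implies a full barrier both before and after
                 * the store, so the naked smp_mb() at the call site
                 * above becomes unnecessary.
                 */
                atomic_long_xchg(&work->data, WORK_STRUCT_NO_POOL);
        }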


>> 2. xchg() et al imply full barrier before and full barrier after.
>>
>> 3. The naked barriers could be removed, while improving efficiency.
>>    On x86, mov + mfence => xchg
> 
> It's unlikely to make any measurable difference.  Is xchg() actually
> cheaper than store + rmb?

Compared to store + mfence (a full barrier), yes: xchg is roughly 2x faster.

https://lkml.org/lkml/2015/11/2/607
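
To illustrate (x86; a sketch only, not actual workqueue code):

        /* store followed by a full barrier: mov + mfence */
        atomic_long_set(&work->data, data);
        smp_mb();

        /*
         * single atomic exchange: an implicitly locked xchg, which is
         * a full barrier before and after the store
         */
        atomic_long_xchg(&work->data, data);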


>> 4. Maybe fixes other hidden bugs.
>>    For example, I'm wondering if reordering with set_work_pwq/list_add_tail
>>    would be a problem; i.e., what if work is visible on the worklist _before_
>>    data is initialized by set_work_pwq()?
> 
> Worklist is always accessed under the pool lock.

I see a lot of list_empty() checks while not holding the pool lock, but
I get what you mean: work that is visible on the worklist isn't actually
inspected without holding the pool lock.
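
The pattern I'm referring to looks roughly like this (a simplified
sketch of kernel/workqueue.c, with the details elided):

        /* lockless check: only an advisory hint, may race */
        if (!list_empty(&pool->worklist))
                /* e.g. decide whether a worker needs waking */;

        /* actual inspection of queued work is done under the lock */
        spin_lock_irq(&pool->lock);
        list_for_each_entry(work, &pool->worklist, entry)
                /* safe to look at work here */;
        spin_unlock_irq(&pool->lock);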


> The barrier comes
> into play only because we're using bare PENDING bit for
> synchronization.

I see that only transitions of PENDING need full barriers, but it's not
clear from the semantics under which conditions PENDING is expected to
already be set. If performance isn't the concern there, maybe those uses
(where PENDING is expected to already be set) should check and warn?
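
Something like this sketch is what I mean (a hypothetical debug check,
using WORK_STRUCT_PENDING_BIT and work_data_bits() from
include/linux/workqueue.h):

        /* at sites that assume the caller already set PENDING */
        WARN_ON_ONCE(!test_bit(WORK_STRUCT_PENDING_BIT,
                               work_data_bits(work)));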


> I'm not necessarily against making all clearings of
> PENDING to be followed by an rmb or use xchg.  Reasons 2-4 are pretty
> weak tho.

I agree 2 and 3 are not the best reasons.
Actually, it looks like I'm in the minority anyway and that, style-wise,
the naked barrier is preferred.

Regards,
Peter Hurley
