Message-ID: <20160426151507.GK7822@mtj.duckdns.org>
Date: Tue, 26 Apr 2016 11:15:07 -0400
From: Tejun Heo <tj@...nel.org>
To: Peter Hurley <peter@...leysoftware.com>
Cc: Roman Pen <roman.penyaev@...fitbricks.com>,
Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, David Howells <dhowells@...hat.com>
Subject: Re: [PATCH 1/1] [RFC] workqueue: fix ghost PENDING flag while doing
MQ IO
Hello, Peter.
On Mon, Apr 25, 2016 at 06:22:01PM -0700, Peter Hurley wrote:
> This is the same bug I wrote about 2 yrs ago (but with the wrong fix).
>
> http://lkml.iu.edu/hypermail/linux/kernel/1402.2/04697.html
>
> Unfortunately I didn't have a reproducer at all :/
Ah, bummer.
> The atomic_long_xchg() patch has several benefits over the naked barrier:
>
> 1. set_work_pool_and_clear_pending() has the same requirements as
> clear_work_data(); note that both require write barrier before and
> full barrier after.
clear_work_data() is only used by __cancel_work_timer() and there's no
following execution or anything where rescheduling memory loads can
cause any issue.
> 2. xchg() et al imply full barrier before and full barrier after.
>
> 3. The naked barriers could be removed, while improving efficiency.
> On x86, mov + mfence => xchg
It's unlikely to make any measurable difference. Is xchg() actually
cheaper than store + rmb?
> 4. Maybe fixes other hidden bugs.
> For example, I'm wondering if reordering with set_work_pwq/list_add_tail
> would be a problem; ie., what if work is visible on the worklist _before_
> data is initialized by set_work_pwq()?
Worklist is always accessed under the pool lock. The barrier comes
into play only because we're using bare PENDING bit for
synchronization. I'm not necessarily against making all clearings of
PENDING to be followed by a rmb or to use xchg. Reasons 2-4 are
pretty weak though.
Thanks.
--
tejun