Message-ID: <BANLkTi=0_bv7s6=i2v-iP0vQmHHT3=tm8w@mail.gmail.com>
Date: Wed, 18 May 2011 02:46:30 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Tejun Heo <tj@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, Sitsofe Wheeler <sitsofe@...oo.com>,
Borislav Petkov <bp@...en8.de>, Meelis Roos <mroos@...ux.ee>,
Andrew Morton <akpm@...ux-foundation.org>,
Kay Sievers <kay.sievers@...y.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND 2/3 v2.6.39-rc7] block: make disk_block_events()
properly wait for work cancellation
On Tue, May 17, 2011 at 10:07 PM, Tejun Heo <tj@...nel.org> wrote:
>
>> Just make the semaphore protect the count - and you're done.
>
> Yeah, with that gone, we don't even need the open-coding inside
> disk_check_events(). It can simply call syncing block and unblock.
> But, do you want that in -rc7? Unnecessarily complicated as the
> current code may be, converting the lock to mutex is a larger change
> than adding an outer mutex and I think it would be better to do that
> during the next cycle.

Quite frankly, right now I think I need to just release 2.6.39, and
then for 2.6.40 merge the trivial
        mutex_lock(&ev->mutex);
        if (!ev->block++)
                cancel_delayed_work_sync(&ev->dwork);
        mutex_unlock(&ev->mutex);
with a cc: stable for backporting.
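
The unblock side would be equally trivial: decrement under the same
mutex and re-arm the polling work once the count drops back to zero.
A rough sketch, assuming the ev->mutex/ev->block/ev->dwork names from
the snippet above and a placeholder poll interval rather than the real
one (this is not the actual patch):

        static void disk_unblock_events_sketch(struct disk_events *ev)
        {
                mutex_lock(&ev->mutex);
                WARN_ON_ONCE(ev->block <= 0);
                if (!--ev->block)
                        /* last unblocker re-arms the polling work */
                        schedule_delayed_work(&ev->dwork, HZ);
                mutex_unlock(&ev->mutex);
        }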

I'd _much_ prefer simple, obvious code over having an outer mutex etc.
Just make the rule be that "blocked" is protected by the new
semaphore. I don't think it's used very often, and anybody who wants
to block disk events needs to be in blockable context in order to wait
for the delayed work cancel, right? So we can't be in some atomic
context inside some other spinlock anyway, afaik. And there can be no
lock order issues, since this would always be a new inner lock.
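
In other words, callers just pair the two from plain process context.
A caller-side sketch using the existing disk_block_events() and
disk_unblock_events() entry points (the step in the middle is only an
illustrative placeholder, not a claim about any particular caller):

        /* process context, may sleep */
        disk_block_events(disk);                /* waits for pending dwork */
        rescan_partitions(disk, bdev);          /* placeholder for real work */
        disk_unblock_events(disk);
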
Hmm?
Linus