Message-ID: <BANLkTikBEJa7bJJoLFU7NoiEgOjVHVG08A@mail.gmail.com>
Date: Tue, 12 Apr 2011 19:23:52 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: NeilBrown <neilb@...e.de>
Cc: Dave Chinner <david@...morbit.com>,
Jens Axboe <jaxboe@...ionio.com>,
"hch@...radead.org" <hch@...radead.org>,
Mike Snitzer <snitzer@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dm-devel@...hat.com" <dm-devel@...hat.com>,
"linux-raid@...r.kernel.org" <linux-raid@...r.kernel.org>
Subject: Re: [PATCH 05/10] block: remove per-queue plugging
On Tue, Apr 12, 2011 at 2:08 PM, NeilBrown <neilb@...e.de> wrote:
> On Wed, 13 Apr 2011 00:34:52 +1000 Dave Chinner <david@...morbit.com> wrote:
>>
>> Well, not really - now taking any sleeping lock or waiting on
>> anything can trigger a plug flush where previously you had to
>> explicitly issue them. I'm not saying what we had is better, just
>> that there are implicit flushes with your changes that are
>> inherently uncontrollable...
>
> It's not just sleeping locks - if preempt is enabled a schedule can happen at
> any time - at any depth. I've seen a spin_unlock do it.
Hmm. I don't think we should flush IO in the preemption path. That
smells wrong on many levels, just one of them being the "any time, any
depth".
It also sounds really wrong from an IO pattern standpoint. The process
is actually still running, and the IO flushing _already_ does the
"only if it's going to sleep" test, but it actually does it _wrong_.
The "current->state" check doesn't make sense for a preemption event,
because it's not actually going to sleep there.
So a patch like the attached (UNTESTED!) sounds like the right thing to do.
Whether it makes any difference for any MD issues, who knows.. But
considering that the unplugging already used to test for "prev->state
!= TASK_RUNNING", this is absolutely the right thing to do - that old
test was just broken.
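The attached patch itself is not reproduced in the archive, but the decision Linus describes can be modeled as a small user-space sketch. This is not the actual patch: the constants, the `should_flush_plug()` helper, and its signature are all illustrative assumptions standing in for the kernel's real task-state and preempt-count machinery. The point it demonstrates is the one above: a `prev->state != TASK_RUNNING` check alone is wrong, because on a preemption the task's state may be non-running even though it is not actually going to sleep, so the preemption case must be excluded explicitly.

```c
#include <stdbool.h>

/* Illustrative stand-ins for the kernel's task states and the
 * PREEMPT_ACTIVE marker; the values are made up for this sketch. */
#define TASK_RUNNING        0L
#define TASK_INTERRUPTIBLE  1L
#define PREEMPT_ACTIVE      0x10000000U

/* Hypothetical helper modeling the fixed test: flush the task's IO
 * plug only on a genuine voluntary sleep.  When PREEMPT_ACTIVE is set
 * the task was preempted mid-execution, so even a non-running
 * prev_state does not mean it is about to sleep. */
static bool should_flush_plug(long prev_state, unsigned int preempt_count)
{
    if (preempt_count & PREEMPT_ACTIVE)
        return false;                   /* preemption: still running */
    return prev_state != TASK_RUNNING;  /* real sleep: flush now */
}
```

The old behavior corresponds to dropping the `PREEMPT_ACTIVE` check and testing `prev_state` alone, which is exactly the case Linus calls broken: a preempted task that happened to have set its state would trigger a flush at an arbitrary point, at any depth.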
Linus
View attachment "patch.diff" of type "text/x-patch" (1000 bytes)