Message-ID: <CA+55aFx=HdDMhK0bL8aNxOS83M3EOKGCLY59QzTF-jT+MPHXBw@mail.gmail.com>
Date: Fri, 11 Sep 2015 16:36:39 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Chris Mason <clm@...com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Josef Bacik <jbacik@...com>,
LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Dave Chinner <david@...morbit.com>, Neil Brown <neilb@...e.de>,
Jan Kara <jack@...e.cz>, Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] fs-writeback: drop wb->list_lock during blk_finish_plug()
On Fri, Sep 11, 2015 at 4:16 PM, Chris Mason <clm@...com> wrote:
>
> For 4.3 timeframes, what runs do you want to see numbers for:
>
> 1) revert
> 2) my hack
> 3) plug over multiple sbs (on different devices)
> 4) ?
Just 2 or 3.
I don't think the plain revert is all that interesting, and I think
the "anything else" is far too late for this merge window.
So we'll go with either (2) your patch (which I obviously don't
_like_, but apart from the ugliness I don't think there's anything
technically wrong with), or with (3) the "plug across a bigger area".
So the only issue with (3) is whether that's just "revert plus the
patch I sent out", or whether we should unplug/replug over the "wait
synchronously for an inode" case (iow, the
"inode_sleep_on_writeback()"). The existing plug code (that has the
spinlock issue) already has a "wait on inode" case, and did *not*
unplug over that call, but broadening the plugging further now ends up
having two of those "wait synchronously on inode" cases.
Are we really ok with waiting synchronously for an inode while holding
the plug? No chance of deadlock (waiting for IO that we've plugged)?
That issue is true even of the current code, though. I have _not_
really thought it through; it's just a worry.
Linus