Message-ID: <20171003000849.GK15067@dastard>
Date: Tue, 3 Oct 2017 11:08:49 +1100
From: Dave Chinner <david@...morbit.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>, Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH RFC] mm: implement write-behind policy for sequential
file writes
On Mon, Oct 02, 2017 at 04:08:46PM -0700, Linus Torvalds wrote:
> On Mon, Oct 2, 2017 at 3:45 PM, Dave Chinner <david@...morbit.com> wrote:
> >
> > Yup, it's a good idea. Needs some tweaking, though.
>
> Probably a lot. 256kB seems very eager.
>
> > If we block on close, it becomes:
>
> I'm not at all suggesting blocking at close, just doing that final
> async writebehind (assuming we started any earlier write-behind) so
> that the writeout ends up seeing the whole file, rather than
> "everything but the very end".
That's fine by me - we already do that in certain cases - but
AFAICT that's not the way the writebehind code as presented
works. If the file is larger than the async writebehind size then
it will also block waiting for previous writebehind to complete.
I think all we'd need is a call to filemap_fdatawrite()....
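Something along these lines is what I mean - a rough sketch only,
and the writebehind_active() test is made up - but the point is the
final close just kicks async writeback and never waits:

/*
 * Sketch, not the posted patch: on final close of a file we've been
 * doing write-behind on, start async writeback of whatever is still
 * dirty but don't wait for it to complete.
 */
static void writebehind_on_close(struct file *file)
{
	struct address_space *mapping = file->f_mapping;

	if (!writebehind_active(mapping))	/* hypothetical helper */
		return;

	/* start writeback of all remaining dirty pages; don't block */
	filemap_fdatawrite(mapping);
}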
> > Perhaps we need to think about a small per-backing dev threshold
> > where the behaviour is the current writeback behaviour, but once
> > it's exceeded we then switch to write-behind so that the amount of
> > dirty data doesn't exceed that threshold.
>
> Yes, that sounds like a really good idea, and as a way to avoid
> starting too early.
>
> However, part of the problem there is that we don't have that
> historical "what is dirty", because it would often be in previous
> files. Konstantin's patch is simple partly because it has only that
> single-file history to worry about.
>
> You could obviously keep that simplicity, and just accept the fact
> that the early dirty data ends up being kept dirty, and consider it
> just the startup cost and not even try to do the write-behind on that
> oldest data.
I'm not sure we need to care about that - the bdi knows how much
dirty data there is on the device, and so we can switch from
device-based writeback to per-file writeback at that point. If we
trigger a background flush of all the existing dirty data when we
switch modes then we wouldn't leave any of it hanging around for
ages while other file data gets written...
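Roughly the check could look like this - again just a sketch, the
bdi_writebehind_thresh knob and the exact stat mix are guesses - but
it shows the mode switch and the background flush when we cross over:

/*
 * Sketch only: stay with normal bdi-based writeback while the device
 * has little dirty data, switch to per-file write-behind once it
 * crosses a small threshold.
 */
static bool should_writebehind(struct inode *inode)
{
	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
	unsigned long dirty = wb_stat(wb, WB_RECLAIMABLE) +
			      wb_stat(wb, WB_WRITEBACK);

	if (dirty < bdi_writebehind_thresh)	/* made-up threshold */
		return false;

	/*
	 * Crossing the threshold: kick background writeback of the
	 * dirty data we already have so it doesn't sit around while
	 * newer file data gets written behind.
	 */
	wb_start_background_writeback(wb);
	return true;
}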
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com