Message-ID: <CA+55aFx5t5YifPXhL2KdTZRFOwLgXLqrpXjdAJHygKhxmMyqNg@mail.gmail.com>
Date: Mon, 2 Oct 2017 16:08:46 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Dave Chinner <david@...morbit.com>
Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>, Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH RFC] mm: implement write-behind policy for sequential file writes
On Mon, Oct 2, 2017 at 3:45 PM, Dave Chinner <david@...morbit.com> wrote:
>
> Yup, it's a good idea. Needs some tweaking, though.
Probably a lot. 256kB seems very eager.
> If we block on close, it becomes:
I'm not at all suggesting blocking at close, just doing that final
async write-behind (assuming we started any earlier write-behind) so
that the writeout ends up seeing the whole file, rather than
"everything but the very end".
> Perhaps we need to think about a small per-backing dev threshold
> where the behaviour is the current writeback behaviour, but once
> it's exceeded we then switch to write-behind so that the amount of
> dirty data doesn't exceed that threshold.
Yes, that sounds like a really good idea, and as a way to avoid
starting too early.
However, part of the problem there is that we don't have that
historical "what is dirty", because it would often be in previous
files. Konstantin's patch is simple partly because it has only that
single-file history to worry about.
You could obviously keep that simplicity, and just accept the fact
that the early dirty data ends up being kept dirty, and consider it
just the startup cost and not even try to do the write-behind on that
oldest data.
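Roughly like this (purely illustrative - struct wb_hint,
wb_example_after_write() and the initial-skip logic are all made up
here, not lifted from Konstantin's patch):

#include <linux/fs.h>
#include <linux/writeback.h>

/* Hypothetical per-file write-behind state, purely for illustration. */
struct wb_hint {
	loff_t flushed_to;		/* end of the last range we pushed */
};

static void wb_example_after_write(struct file *file, struct wb_hint *hint,
				   loff_t pos, size_t count)
{
	loff_t end = pos + count;
	loff_t threshold = 256 * 1024;	/* the current trigger; likely too small */

	if (end <= threshold)
		return;

	/*
	 * First time over the threshold: skip the oldest (already dirty)
	 * data rather than going back for it - that's the startup cost.
	 */
	if (hint->flushed_to == 0)
		hint->flushed_to = pos;

	/* Async write-behind of the chunk written since the last flush point. */
	if (end - hint->flushed_to >= threshold) {
		__filemap_fdatawrite_range(file->f_mapping, hint->flushed_to,
					   end - 1, WB_SYNC_NONE);
		hint->flushed_to = end;
	}
}

The per-backing-dev variant you suggest would basically just replace
that per-file counter with a per-bdi dirty count before the switch-over.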
But I do agree that 256kB is a very early threshold, and likely too
small for many cases.
Linus