Message-ID: <20190925125409.GD18094@mit.edu>
Date: Wed, 25 Sep 2019 08:54:09 -0400
From: "Theodore Y. Ts'o" <tytso@....edu>
To: Dave Chinner <david@...morbit.com>
Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Tejun Heo <tj@...nel.org>, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Jens Axboe <axboe@...nel.dk>, Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2] mm: implement write-behind policy for sequential file
writes
On Wed, Sep 25, 2019 at 05:18:54PM +1000, Dave Chinner wrote:
> > > And, really such strict writebehind behaviour is going to cause all
> > > sorts of unintended problems with filesystems because there will be
> > > adverse interactions with delayed allocation. We need a substantial
> > > amount of dirty data to be cached for writeback for fragmentation
> > > minimisation algorithms to be able to do their job....
> >
> > I think most sequentially written files never change after close.
>
> There are lots of apps that write zeros to initialise and allocate
> space, then go write real data to them. Database WAL files are
> commonly initialised like this...
Fortunately, most of the time enterprise database files are
initialized via a fd which is then kept open. And it's only a single
file. So that's a heuristic that's not too bad to handle, so long as
it's only triggered when there are no open file descriptors on said
inode. If something is still keeping the file open, then we do need
to be very careful about writebehind.
That being said, databases are going to be calling fdatasync(2) and
fsync(2) all the time, so it's unlikely writebehind is going to be
that much of an issue, so long as the max writebehind knob isn't set
insanely low. It's been over ten years since I last looked at this,
so things may very likely have changed, but one enterprise database
I looked at would fallocate 32M, and then write 32M of zeros to make
sure the blocks were marked as initialized, so that further random
writes wouldn't cause metadata updates.
Now, there *are* applications which log to files via append, and in
the worst case, they don't actually keep a fd open. Examples of this
would include scripts that call logger(1) very often. But in general,
taking into account whether or not there is still a fd holding the
inode open when deciding how aggressively to do writeback does make
sense.
Finally, we should remember that this will impact battery life on
laptops. Perhaps not so much now that most laptops have SSDs instead
of HDDs, but aggressive writebehind certainly has tradeoffs, and
what makes sense for an NVMe-attached SSD is going to be very
different for a $2 USB thumb drive picked up at the checkout aisle of
Staples....
- Ted