Date:   Tue, 24 Sep 2019 12:29:34 +0300
From:   Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Jens Axboe <axboe@...nel.dk>, Michal Hocko <mhocko@...e.com>,
        Dave Chinner <david@...morbit.com>,
        Mel Gorman <mgorman@...e.de>,
        Johannes Weiner <hannes@...xchg.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v2] mm: implement write-behind policy for sequential file
 writes

On 21/09/2019 02.05, Linus Torvalds wrote:
> On Fri, Sep 20, 2019 at 12:35 AM Konstantin Khlebnikov
> <khlebnikov@...dex-team.ru> wrote:
>>
>> This patch implements a write-behind policy which tracks sequential writes
>> and starts background writeback when a file has enough dirty pages.
> 
> Apart from a spelling error ("contigious"), my only reaction is that
> I've wanted this for the multi-file writes, not just for single big
> files.
> 
> Yes, single big files may be simpler and perhaps the "10% effort for
> 90% of the gain", and thus the right thing to do, but I do wonder if
> you've looked at simply extending it to cover multiple files when
> people copy a whole directory (or unpack a tar-file, or similar).
> 
> Now, I hear you say "those are so small these days that it doesn't
> matter". And maybe you're right. But partiocularly for slow media,
> triggering good streaming write behavior has been a problem in the
> past.
> 
> So I'm wondering whether the "writebehind" state should perhaps be
> considered to be a process state, rather than "struct file" state, and
> also start triggering for writing smaller files.

It's simple to extend the existing state with a per-task counter of sequential
writes to detect patterns like unpacking a tarball of small files.
After reaching some threshold, write-behind could flush files right at close.
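
For illustration only (not part of the patch), a rough userspace analogue of
that heuristic, with an invented threshold and helper name:

        /* Sketch: count sequentially written files in this process and, past
         * a made-up threshold, start background writeback for each file just
         * before closing it. */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <unistd.h>

        #define SEQ_WRITE_THRESHOLD     8       /* hypothetical */

        static int seq_write_count;             /* analogue of a per-task counter */

        /* use instead of plain close() after writing a file sequentially */
        static int close_with_writebehind(int fd)
        {
                if (++seq_write_count >= SEQ_WRITE_THRESHOLD)
                        /* offset 0, nbytes 0 = whole file; async start only */
                        sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);
                return close(fd);
        }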

But in this case it's hard to wait for previous writes in order to limit the
number of requests and pages in writeback for each stream.
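
For a single stream the kind of throttling meant here can be sketched from
userspace with sync_file_range() (the window size is arbitrary, not from the
patch); doing the equivalent across many small files is the hard part:

        #define _GNU_SOURCE
        #include <fcntl.h>

        #define WINDOW  (8UL << 20)     /* hypothetical 8 MiB write-behind window */

        /* call after each write(); 'written' is total bytes written so far,
         * '*flushed' is how much already has writeback started */
        static void writebehind_window(int fd, off_t written, off_t *flushed)
        {
                while (written - *flushed >= (off_t)WINDOW) {
                        /* start async writeback of the window just filled */
                        sync_file_range(fd, *flushed, WINDOW, SYNC_FILE_RANGE_WRITE);
                        /* wait for the previous window, bounding in-flight pages */
                        if (*flushed >= (off_t)WINDOW)
                                sync_file_range(fd, *flushed - WINDOW, WINDOW,
                                                SYNC_FILE_RANGE_WAIT_BEFORE |
                                                SYNC_FILE_RANGE_WRITE |
                                                SYNC_FILE_RANGE_WAIT_AFTER);
                        *flushed += WINDOW;
                }
        }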

Theoretically we could build a chain of inodes for delaying and batching.

> 
> Maybe this was already discussed and people decided that the big-file
> case was so much easier that it wasn't worth worrying about
> writebehind for multiple files.
> 
>              Linus
> 
