lists.openwall.net mailing list archives
Date:   Sun, 1 Mar 2020 09:41:24 +1100
From:   Dave Chinner <david@...morbit.com>
To:     Kirill Tkhai <ktkhai@...tuozzo.com>
Cc:     Christoph Hellwig <hch@...radead.org>, tytso@....edu,
        viro@...iv.linux.org.uk, adilger.kernel@...ger.ca,
        snitzer@...hat.com, jack@...e.cz, ebiggers@...gle.com,
        riteshh@...ux.ibm.com, krisman@...labora.com, surajjs@...zon.com,
        dmonakhov@...il.com, mbobrowski@...browski.org, enwlinux@...il.com,
        sblbir@...zon.com, khazhy@...gle.com, linux-ext4@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH RFC 5/5] ext4: Add fallocate2() support

On Fri, Feb 28, 2020 at 03:41:51PM +0300, Kirill Tkhai wrote:
> On 28.02.2020 00:56, Dave Chinner wrote:
> > On Thu, Feb 27, 2020 at 02:12:53PM +0300, Kirill Tkhai wrote:
> >> On 27.02.2020 10:33, Dave Chinner wrote:
> >>> On Wed, Feb 26, 2020 at 11:05:23PM +0300, Kirill Tkhai wrote:
> >>>> On 26.02.2020 18:55, Christoph Hellwig wrote:
> >>>>> On Wed, Feb 26, 2020 at 04:41:16PM +0300, Kirill Tkhai wrote:
> >>>>>> This adds support for a physical placement hint to the fallocate2()
> >>>>>> syscall. When the @physical argument is set for ext4_fallocate(),
> >>>>>> we try to allocate blocks only from the [@physical, @physical + len]
> >>>>>> range; no other blocks are used.
> >>>>>
> >>>>> Sorry, but this is a complete bullshit interface.  Userspace has
> >>>>> absolutely no business even thinking of physical placement.  If you
> >>>>> want to align allocations to physical block granularity boundaries
> >>>>> that is the file systems job, not the applications job.
> >>>>
> >>>> Why? There are two contradictory actions that a filesystem can't do at the same time:
> >>>>
> >>>> 1) place files at a distance from each other to minimize the number of
> >>>>    extents on possible future growth;
> >>>
> >>> Speculative EOF preallocation at delayed allocation reservation time
> >>> provides this.
> >>>
> >>>> 2) place small files in the same big block of the block device.
> >>>
> >>> Delayed allocation during writeback packs files smaller than the
> >>> stripe unit of the filesystem tightly.
> >>>
> >>> So, yes, we do both of these things at the same time in XFS, and
> >>> have for the past 10 years.
> >>>
> >>>> At initial allocation time you never know which file will stop growing at
> >>>> some point in the future, i.e. which file is suitable for compaction. This
> >>>> knowledge becomes available some time later. Say, if a file has not been
> >>>> changed for a month, it is suitable for compaction with other files like it.
> >>>>
> >>>> If you can determine at allocation time which files won't grow in the
> >>>> future, don't be afraid, just share your algorithm here.
> >>>>
> >>>> In Virtuozzo we tried to compact ext4 with existing kernel interface:
> >>>>
> >>>> https://github.com/dmonakhov/e2fsprogs/blob/e4defrag2/misc/e4defrag2.c
> >>>>
> >>>> But it does not work well in many situations, and the main problem is that
> >>>> block allocation at a desired place is not possible. The block allocator
> >>>> can't behave excellently for everything.
> >>>>
> >>>> If this interface is bad, can you suggest another interface to let the
> >>>> block allocator know the behavior expected of it in this specific case?
> >>>
> >>> Write once, long term data:
> >>>
> >>> 	fcntl(fd, F_SET_RW_HINT, RWH_WRITE_LIFE_EXTREME);
> >>>
> >>> That will allow the storage stack to group all data with the
> >>> same hint together, both in software and in hardware.
> >>
> >> This is an interesting option, but it is only applicable before the write
> >> is made. And it's only applicable to your own applications. My use case is
> >> defragmentation of containers, where any application may run. Most
> >> applications never care whether the data they write is long- or short-term.
> > 
> > Why is that a problem? They'll be using the default write hint (i.e.
> > NONE) and so a hint aware allocation policy will be separating that
> > data from all the other data written with specific hints...
> > 
> > And you've mentioned that your application has specific *never write
> > again* selection criteria for data it is repacking. And that
> > involves rewriting that data.  IOWs, you know exactly what policy
> > you want to apply before you rewrite the data, and so what other
> > applications do is completely irrelevant for your repacker...
> 
> It is not rewriting data; it is moving the data to a new place with EXT4_IOC_MOVE_EXT.

"rewriting" is a technical term for reading data at rest and writing
it again, whether it be to the same location or to some other
location. Changing physical location of data, by definition,
requires rewriting data.

EXT4_IOC_MOVE_EXT = data rewrite + extent swap to update the
metadata in the original file to point at the new data. Hence it
appears to "move" from userspace perspective (hence the name) but
under the covers it is rewriting data and fiddling pointers...
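
For reference, a minimal sketch of what an EXT4_IOC_MOVE_EXT call looks like
from userspace. The struct move_extent layout and ioctl number match the
kernel's ext4 definition; the wrapper function and its name are illustrative,
and setting up a donor file whose blocks sit at the desired location is left
out:

```c
/* Sketch of an EXT4_IOC_MOVE_EXT call: the kernel rewrites the data into
 * the donor file's blocks, then swaps the extents so the original inode
 * points at the new location. Assumes both fds are on the same ext4 fs. */
#include <stdint.h>
#include <sys/ioctl.h>

#ifndef EXT4_IOC_MOVE_EXT
struct move_extent {
	uint32_t reserved;    /* must be zero */
	uint32_t donor_fd;    /* fd of the donor file holding target blocks */
	uint64_t orig_start;  /* logical start in the original file, in blocks */
	uint64_t donor_start; /* logical start in the donor file, in blocks */
	uint64_t len;         /* number of blocks to move */
	uint64_t moved_len;   /* out: number of blocks actually moved */
};
#define EXT4_IOC_MOVE_EXT	_IOWR('f', 15, struct move_extent)
#endif

/* Move 'len' blocks of orig_fd into the blocks backing donor_fd.
 * Returns the number of blocks moved, or -1 with errno set. */
int64_t move_extents(int orig_fd, int donor_fd, uint64_t start, uint64_t len)
{
	struct move_extent me = {
		.donor_fd    = (uint32_t)donor_fd,
		.orig_start  = start,
		.donor_start = start,
		.len         = len,
	};

	if (ioctl(orig_fd, EXT4_IOC_MOVE_EXT, &me) < 0)
		return -1;
	return (int64_t)me.moved_len;
}
```

This is exactly why the operation is a rewrite: the data is copied into the
donor's blocks before the extent pointers are swapped.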

> > What the filesystem does with the hint is up to the filesystem
> > and the policies that it's developers decide are appropriate. If
> > your filesystem doesn't do what you need, talk to the filesystem
> > developers about implementing the policy you require.
> 
> Do the XFS kernel defrag interfaces allow packing some randomly chosen
> small files into 1MB blocks? Do they allow packing a small 4KB file into
> the free space after a big file, like in this example:

No. Randomly selecting small holes for small file writes is a
terrible idea from a performance perspective. Hence filling tiny
holes (not randomly!) is often only done for metadata allocation
(e.g. extent map blocks, which are largely random access anyway) or
if there is no other choice for data (e.g. at ENOSPC).
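
For completeness, the write-lifetime hint discussed earlier in the thread is
set per inode with fcntl(). A minimal sketch, assuming a kernel new enough to
support the hint (4.13+); the fallback constant definitions mirror
linux/fcntl.h, and the helper function name is illustrative:

```c
/* Tag an open file as holding write-once, long-lived data so the storage
 * stack (software and hardware) can group it with similar data. */
#include <stdint.h>
#include <fcntl.h>

#ifndef F_SET_RW_HINT
#define F_LINUX_SPECIFIC_BASE	1024
#define F_GET_RW_HINT		(F_LINUX_SPECIFIC_BASE + 11)
#define F_SET_RW_HINT		(F_LINUX_SPECIFIC_BASE + 12)
#define RWH_WRITE_LIFE_EXTREME	5
#endif

/* Returns 0 on success, -1 with errno set (e.g. on a pre-4.13 kernel). */
int mark_write_once(int fd)
{
	uint64_t hint = RWH_WRITE_LIFE_EXTREME;

	return fcntl(fd, F_SET_RW_HINT, &hint);
}
```

A repacker could call this on each file it rewrites, letting a hint-aware
allocator separate that data from default-hinted writes without any explicit
physical placement.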

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
