Message-ID: <555238B5.9050705@ontolab.com>
Date:	Tue, 12 May 2015 19:30:29 +0200
From:	Christian Stroetmann <stroetmann@...olab.com>
To:	Daniel Phillips <daniel@...nq.net>
CC:	David Lang <david@...g.hm>, Pavel Machek <pavel@....cz>,
	Howard Chu <hyc@...as.com>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Dave Chinner <david@...morbit.com>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	tux3@...3.org, Theodore Ts'o <tytso@....edu>,
	OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent performance?
 (was Tux3 Report: How fast can we fsync?)

On 05/12/2015 06:36 AM, Daniel Phillips wrote:
> Hi David,
>
> On 05/11/2015 05:12 PM, David Lang wrote:
>> On Mon, 11 May 2015, Daniel Phillips wrote:
>>
>>> On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>>>>> It is a fact of life that when you change one aspect of an intimately interconnected system,
>>>>>> something else will change as well. You have naive/nonexistent free space management now; when you
>>>>>> design something workable there it is going to impact everything else you've already done. It's an
>>>>>> easy bet that the impact will be negative, the only question is to what degree.
>>>>> You might lose that bet. For example, suppose we do strictly linear allocation
>>>>> each delta, and just leave nice big gaps between the deltas for future
>>>>> expansion. Clearly, we run at similar or identical speed to the current naive
>>>>> strategy until we must start filling in the gaps, and at that point our layout
>>>>> is not any worse than XFS, which started bad and stayed that way.
>>>> Umm, are you sure? If "some areas of disk are faster than others" is
>>>> still true on today's hard drives, the gaps will decrease the
>>>> performance (as you'll "use up" the fast areas more quickly).
>>> That's why I hedged my claim with "similar or identical". The
>>> difference in media speed seems to be a relatively small effect
>>> compared to extra seeks. It seems that XFS puts big spaces between
>>> new directories, and suffers a lot of extra seeks because of it.
>>> I propose to batch new directories together initially, then change
>>> the allocation goal to a new, relatively empty area if a big batch
>>> of files lands on a directory in a crowded region. The "big" gaps
>>> would be on the order of delta size, so not really very big.
>> This is an interesting idea, but what happens if the files don't arrive as a big batch, but rather
>> trickle in over time (think a log server that is putting files into a bunch of directories at a
>> fairly modest rate per directory)?
> If files are trickling in then we can afford to spend a lot more time
> finding nice places to tuck them in. Log server files are an especially
> irksome problem for a redirect-on-write filesystem because the final
> block tends to be rewritten many times and we must move it to a new
> location each time, so every extent ends up as one block. Oh well. If
> we just make sure to have some free space at the end of the file that
> only that file can use (until everywhere else is full) then the long
> term result will be slightly ravelled blocks that nonetheless tend to
> be on the same track or flash block as their logically contiguous
> neighbours. There will be just zero or one empty data blocks mixed
> into the file tail as we commit the tail block over and over with the
> same allocation goal. Sometimes there will be a block or two of
> metadata as well, which will eventually bake themselves into the
> middle of contiguous data and stop moving around.
>
> Putting this together, we have:
>
>    * At delta flush, break out all the log type files
>    * Dedicate some block groups to append type files
>    * Leave lots of space between files in those block groups
>    * Peek at the last block of the file to set the allocation goal
>
> Something like that. What we don't want is to throw those files into
> the middle of a lot of rewrite-all files, messing up both kinds of file.
> We don't care much about keeping these files near the parent directory
> because one big seek per log file in a grep is acceptable, we just need
> to avoid thousands of big seeks within the file, and not dribble single
> blocks all over the disk.
>
> It would also be nice to merge together extents somehow as the final
> block is rewritten. One idea is to retain the final block dirty until
> the next delta, and write it again into a contiguous position, so the
> final block is always flushed twice. We already have the opportunistic
> merge logic, but the redirty behavior and making sure it only happens
> to log files would be a bit fiddly.
>
> We will also play the incremental defragmentation card at some point,
> but first we should try hard to control fragmentation in the first
> place. Tux3 is well suited to online defragmentation because the delta
> commit model makes it easy to move things around efficiently and safely,
> but it does generate extra IO, so as a basic mechanism it is not ideal.
> When we get to piling on features, that will be high on the list,
> because it is relatively easy, and having that fallback gives a certain
> sense of security.

So we are again at some more features of SASOS4Fun.

That said, as an alleged troll expert I can see the agenda and strategy 
behind this and related threads, but there is still no usable code/file 
system at all, and hence nothing that might even be ready for merging, as 
I understand the statements of the file system gurus.

So it is time for the developer(s) to decide what should eventually be 
implemented and manifested in code, and then to show the complete result, 
so that others can run the tests and the benchmarks.



Thanks
Best Regards
Do not feed the trolls.
C.S.

>> And when you then decide that you have to move the directory/file info, doesn't that create a
>> potentially large amount of unexpected IO that could end up interfering with what the user is trying
>> to do?
> Right, we don't like that and don't plan to rely on it. What we hope
> for is behavior that, when you slowly stir the pot, tends to improve the
> layout just as often as it degrades it. It may indeed become harder to
> find ideal places to put things as time goes by, but we also gain more
> information to base decisions on.
>
> Regards,
>
> Daniel

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
