Date:	Thu, 30 Apr 2015 15:28:21 +0100
From:	Howard Chu <hyc@...as.com>
To:	Daniel Phillips <daniel@...nq.net>,
	Mike Galbraith <umgwanakikbuti@...il.com>
Cc:	Dave Chinner <david@...morbit.com>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, tux3@...3.org,
	Theodore Ts'o <tytso@....edu>,
	OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent
 performance? (was Tux3 Report: How fast can we fsync?)

Daniel Phillips wrote:
>
>
> On 04/30/2015 06:48 AM, Mike Galbraith wrote:
>> On Thu, 2015-04-30 at 05:58 -0700, Daniel Phillips wrote:
>>> On Thursday, April 30, 2015 5:07:21 AM PDT, Mike Galbraith wrote:
>>>> On Thu, 2015-04-30 at 04:14 -0700, Daniel Phillips wrote:
>>>>
>>>>> Lovely sounding argument, but it is wrong because Tux3 still beats XFS
>>>>> even with seek time factored out of the equation.
>>>>
>>>> Hm.  Do you have big-storage comparison numbers to back that?  I'm no
>>>> storage guy (waiting for holographic crystal arrays to obsolete all this
>>>> crap;), but Dave's big-storage guy words made sense to me.
>>>
>>> This has nothing to do with big storage. The proposition was that seek
>>> time is the reason for Tux3's fsync performance. That claim was easily
>>> falsified by removing the seek time.
>>>
>>> Dave's big storage words are there to draw attention away from the fact
>>> that XFS ran the Git tests four times slower than Tux3 and three times
>>> slower than Ext4. Whatever the big storage excuse is for that, the fact
>>> is, XFS obviously sucks at little storage.
>>
>> If you allocate spanning the disk from start of life, you're going to
>> eat seeks that others don't until later.  That seemed rather obvious and
>> straightforward.
>
> It is a logical fallacy. It mixes a grain of truth (spreading allocations
> all over the disk causes extra seeks) with an obvious falsehood (that
> doing so is the only possible way to avoid long-term fragmentation).
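
(For concreteness: "removing the seek time" above presumably means
rerunning the fsync load on media with no mechanical seeks, e.g. a
ramdisk or an SSD. A minimal sketch of that kind of write+fsync
microbenchmark follows; the path and iteration count are invented for
illustration, not taken from anything actually run in this thread.)

/* Times ITERS write()+fsync() pairs against PATH.  Run it with PATH
 * on a ramdisk-backed mount to factor seek time out, then on a
 * spinning disk to put it back in.  PATH and ITERS are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define PATH  "/mnt/test/fsync-probe"	/* hypothetical mount point */
#define ITERS 1000

int main(void)
{
	char buf[4096] = { 0 };
	struct timespec t0, t1;
	int fd = open(PATH, O_CREAT | O_WRONLY | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++) {
		if (write(fd, buf, sizeof(buf)) != sizeof(buf) || fsync(fd)) {
			perror("write/fsync");
			return 1;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec)
		    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%d write+fsync pairs in %.3f s (%.1f/s)\n",
	       ITERS, secs, ITERS / secs);
	close(fd);
	return 0;
}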

You're reading into it what isn't there. Spreading over the disk isn't 
(just) about avoiding fragmentation - it's about delivering consistent 
and predictable latency. It is undeniable that if you start by only 
allocating from the fastest portion of the platter, you are going to see 
performance slow down over time. If you start by spreading allocations 
across the entire platter, you make the worst-case and average-case 
latency equal, which is exactly what a lot of folks are looking for.
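
To put rough numbers on that, here is a toy model; the cost function
and sizes are invented for illustration, not measurements of XFS or
Tux3. It compares the average access cost of early allocations against
late ones under each policy.

/* Toy model of the two allocation policies in question: fill the
 * fastest region of the device first, vs. spread allocations across
 * the whole device from day one.  Block "cost" stands in for access
 * latency and simply grows with block address. */
#include <stdio.h>

#define NBLOCKS 100000L
#define NGROUPS 16L

static double cost(long blk)
{
	/* 1x at the fast (outer) edge, 10x at the slow (inner) edge */
	return 1.0 + 9.0 * blk / (NBLOCKS - 1);
}

int main(void)
{
	long decile = NBLOCKS / 10;

	/* Compare the first 10% of allocations (fresh fs) with the
	 * last 10% (nearly full fs) under each policy. */
	for (int late = 0; late <= 1; late++) {
		long start = late ? NBLOCKS - decile : 0;
		double ff = 0, sp = 0;

		for (long i = start; i < start + decile; i++) {
			/* fastest-first: the i-th allocation lands at block i */
			ff += cost(i);
			/* spread: round-robin across NGROUPS allocation
			 * groups laid out across the whole device */
			sp += cost((i % NGROUPS) * (NBLOCKS / NGROUPS)
				   + i / NGROUPS);
		}
		printf("%s 10%% of allocations: fastest-first %.2f, spread %.2f\n",
		       late ? "last " : "first", ff / decile, sp / decile);
	}
	return 0;
}

In this model, fastest-first looks great on a fresh filesystem (about
1.4x) and awful on a nearly full one (about 9.6x), while the spread
policy barely moves (about 5.3x vs. 5.8x). That flat line is the
predictability argument in miniature.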

>> He flat stated that xfs has passable performance on a
>> single bit of rust, and openly explained why.  I see no misdirection,
>> only some evidence of bad blood between you two.
>
> Raising the spectre of theoretical fragmentation issues when we have not
> even begun that work is a straw man and intellectually dishonest. You have
> to wonder why he does it. It is destructive to our community image and
> harmful to progress.

It is a fact of life that when you change one aspect of an intimately 
interconnected system, something else will change as well. You have 
naive/nonexistent free space management now; when you design something 
workable there, it is going to impact everything else you've already 
done. It's an easy bet that the impact will be negative, the only 
question is to what degree.
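
To be concrete about what "workable free space management" touches:
the interface is tiny but load-bearing, because whatever placement
policy it implements dictates the on-disk layout that fsync,
readahead, and long-term fragmentation all inherit. A generic
first-fit bitmap sketch (emphatically not Tux3 code; all names
invented for illustration):

/* Generic block-allocator sketch: one bit per block, first fit from a
 * goal block.  Every other subsystem inherits the layout this policy
 * produces, which is why changing it ripples through the design. */
#include <stdint.h>
#include <stdio.h>

#define NBLOCKS 4096L

static uint8_t bitmap[NBLOCKS / 8];	/* 1 = block in use */

static int  test_bit(long b)  { return bitmap[b >> 3] >> (b & 7) & 1; }
static void set_bit(long b)   { bitmap[b >> 3] |=   1 << (b & 7); }
static void clear_bit(long b) { bitmap[b >> 3] &= ~(1 << (b & 7)); }

/* First fit starting at a caller-supplied goal block: the placement
 * policy lives entirely in this loop. */
static long balloc(long goal)
{
	for (long i = 0; i < NBLOCKS; i++) {
		long b = (goal + i) % NBLOCKS;
		if (!test_bit(b)) {
			set_bit(b);
			return b;
		}
	}
	return -1;	/* out of space */
}

static void bfree(long b)
{
	clear_bit(b);
}

int main(void)
{
	long a = balloc(0), b = balloc(0);
	printf("allocated %ld, %ld\n", a, b);		/* 0, 1 */
	bfree(a);
	printf("after bfree: %ld\n", balloc(0));	/* 0 again */
	return 0;
}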

-- 
   -- Howard Chu
   CTO, Symas Corp.           http://www.symas.com
   Director, Highland Sun     http://highlandsun.com/hyc/
   Chief Architect, OpenLDAP  http://www.openldap.org/project/
