Message-ID: <4FEC3FAA.1060503@redhat.com>
Date:	Thu, 28 Jun 2012 07:27:38 -0400
From:	Ric Wheeler <rwheeler@...hat.com>
To:	Eric Sandeen <sandeen@...hat.com>
CC:	"Theodore Ts'o" <tytso@....edu>,
	Ric Wheeler <ricwheeler@...il.com>,
	Fredrick <fjohnber@...o.com>, linux-ext4@...r.kernel.org,
	Andreas Dilger <adilger@...ger.ca>, wenqing.lz@...bao.com
Subject: Re: ext4_fallocate

On 06/27/2012 07:02 PM, Eric Sandeen wrote:
> On 6/27/12 3:30 PM, Theodore Ts'o wrote:
>> On Tue, Jun 26, 2012 at 04:44:08PM -0400, Eric Sandeen wrote:
>>>> I tried running this fio recipe on v3.3, which I think does a decent job of
>>>> emulating the situation (fallocate 1G, do random 1M writes into it, with
>>>> fsyncs after each):
>>>>
>>>> [test]
>>>> filename=testfile
>>>> rw=randwrite
>>>> size=1g
>>>> filesize=1g
>>>> bs=1024k
>>>> ioengine=sync
>>>> fallocate=1
>>>> fsync=1
>> A better workload would be to use a blocksize of 4k.  With a blocksize
>> of 1024k, it's not surprising that the metadata overhead is lost in the
>> noise.
>>
>> Try something like this; this will cause the extent tree overhead to
>> be roughly equal to the data block I/O.
>>
>> [global]
>> rw=randwrite
>> size=128m
>> filesize=1g
>> bs=4k
>> ioengine=sync
>> fallocate=1
>> fsync=1
>>
>> [thread1]
>> filename=testfile
> Well, ok ... TBH I changed it to size=16m so it would finish in under 20 minutes... so here are the results:
>
> fallocate 1g, do 16m of 4k random IOs, sync after each:
>
> # for I in a b c; do rm -f testfile; echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=154KB/s, minb=158KB/s, maxb=158KB/s, mint=105989msec, maxt=105989msec
>    WRITE: io=16384KB, aggrb=163KB/s, minb=167KB/s, maxb=167KB/s, mint=99906msec, maxt=99906msec
>    WRITE: io=16384KB, aggrb=176KB/s, minb=180KB/s, maxb=180KB/s, mint=92791msec, maxt=92791msec
>
> same, but overwrite pre-written 1g file (same as the expose-my-data option ;)
>
> # dd if=/dev/zero of=testfile bs=1M count=1024
> # for I in a b c; do echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=164KB/s, minb=168KB/s, maxb=168KB/s, mint=99515msec, maxt=99515msec
>    WRITE: io=16384KB, aggrb=164KB/s, minb=168KB/s, maxb=168KB/s, mint=99371msec, maxt=99371msec
>    WRITE: io=16384KB, aggrb=164KB/s, minb=168KB/s, maxb=168KB/s, mint=99677msec, maxt=99677msec
>
> so no great surprise, small synchronous 4k writes have terrible performance, but I'm still not seeing a lot of fallocate overhead.
>
> xfs, FWIW:
>
> # for I in a b c; do rm -f testfile; echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=202KB/s, minb=207KB/s, maxb=207KB/s, mint=80980msec, maxt=80980msec
>    WRITE: io=16384KB, aggrb=203KB/s, minb=208KB/s, maxb=208KB/s, mint=80508msec, maxt=80508msec
>    WRITE: io=16384KB, aggrb=204KB/s, minb=208KB/s, maxb=208KB/s, mint=80291msec, maxt=80291msec
>
> # dd if=/dev/zero of=testfile bs=1M count=1024
> # for I in a b c; do echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=197KB/s, minb=202KB/s, maxb=202KB/s, mint=82869msec, maxt=82869msec
>    WRITE: io=16384KB, aggrb=203KB/s, minb=208KB/s, maxb=208KB/s, mint=80348msec, maxt=80348msec
>    WRITE: io=16384KB, aggrb=202KB/s, minb=207KB/s, maxb=207KB/s, mint=80827msec, maxt=80827msec
>
> Again, I think this is just a diabolical workload ;)
>
> -Eric

We need to keep in mind what the goal of pre-allocation is (or should be?) -
spend a bit of extra time up front in the allocation call so that we get a
really good, contiguous layout on disk, which ultimately helps streaming
read/write workloads.
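
For reference, the call under discussion is fallocate(2).  A minimal sketch
(not taken from this thread; the file name and 1GB size are just the values
from Eric's recipe) of preallocating the test file that way:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_CREAT | O_WRONLY, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* mode 0: allocate blocks and extend i_size; the blocks come back
	 * as allocated-but-unwritten extents, which is exactly the case
	 * being benchmarked above. */
	if (fallocate(fd, 0, 0, 1024 * 1024 * 1024) < 0) {
		perror("fallocate");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}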

If you have a reasonably small file, pre-allocation is probably simply a waste
of time - you would be better off just writing the whole file out with zeros
up front (even a 1GB file would take only a few seconds).
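
That is essentially what the dd line in Eric's test above does; the same
zero-fill approach in C (again just a sketch, with the 1GB size and file
name as placeholders) would look something like:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	static char buf[1 << 20];	/* 1MB of zeros (static => zeroed) */
	int fd = open("testfile", O_CREAT | O_TRUNC | O_WRONLY, 0644);
	int i;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (i = 0; i < 1024; i++) {	/* 1024 x 1MB = 1GB */
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			perror("write");
			close(fd);
			return 1;
		}
	}

	fsync(fd);			/* make sure it really hit the disk */
	close(fd);
	return 0;
}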

If the file is large enough to be interesting, we might want to think about a
scheme that brings small random IOs more into line with the 1MB results Eric
saw.

One way to do that might be to have a minimum "chunk" that we zero out for any
IO into an allocated-but-unwritten extent: if you write 4KB into the middle of
such a region, we pad the write out to the nearest MB boundaries with zeros.
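
The alignment arithmetic for that is trivial; as a sketch of the idea only
(nothing like this exists in ext4 today - the 1MB "CHUNK" below is a
hypothetical knob):

#include <stdio.h>

#define CHUNK	(1024 * 1024)	/* hypothetical zero-out granularity */

int main(void)
{
	unsigned long long off = 123 * 4096ULL;	/* e.g. a 4KB write ... */
	unsigned long long len = 4096;		/* ... somewhere in the file */
	unsigned long long start, end;

	/* round the written range out to CHUNK boundaries */
	start = off & ~(unsigned long long)(CHUNK - 1);
	end = (off + len + CHUNK - 1) & ~(unsigned long long)(CHUNK - 1);

	/* [start, off) and [off + len, end) get written as zeros, and only
	 * the aligned chunk is converted from unwritten to written. */
	printf("caller writes [%llu, %llu)\n", off, off + len);
	printf("we zero-fill  [%llu, %llu) and [%llu, %llu)\n",
	       start, off, off + len, end);
	return 0;
}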

Note that for the target class of drives (S-ATA) that Ted mentioned earlier, a
random 1MB write is not that much slower than a random 4KB write - you have to
pay the head movement cost either way.  Of course, the sweet spot might turn
out to be a bit smaller or larger than 1MB.
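
As a rough back-of-the-envelope (assuming ~10ms of seek plus rotational
latency and ~100MB/s of sequential throughput, which are only ballpark
numbers for commodity S-ATA): a random 4KB write costs about 10ms + 0.04ms,
while a random 1MB write costs about 10ms + 10ms - roughly 2x the time for
256x the data.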

Ric


