Message-Id: <200902172343.13838.nickpiggin@yahoo.com.au>
Date:	Tue, 17 Feb 2009 23:43:13 +1100
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Hisashi Hifumi <hifumi.hisashi@....ntt.co.jp>
Cc:	Trond.Myklebust@...app.com, linux-nfs@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] NFS: Pagecache usage optimization on nfs

On Tuesday 17 February 2009 15:55:12 Hisashi Hifumi wrote:
> Hi, Trond.
>
> I wrote an "is_partially_uptodate" aop for the NFS client, named
> nfs_is_partially_uptodate(). This aop checks that an nfs_page is
> attached to the page and that the read I/O to the page falls within
> the range from wb_pgbase to wb_pgbase + wb_bytes of that nfs_page.
> If this aop succeeds, we do not have to issue an actual read I/O to
> the NFS server even when the page is not uptodate, because the
> portion we want to read is already uptodate. So with this patch,
> random read/write mixed workloads, or random reads after random
> writes, are optimized and we get a performance improvement.
>
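> In outline, the aop has roughly this shape (a minimal sketch for
> illustration, not the patch itself; nfs_page_find_request() is assumed
> to be visible here, and page locking is left to the generic caller):
>
> static int nfs_is_partially_uptodate(struct page *page,
> 				     read_descriptor_t *desc,
> 				     unsigned long from)
> {
> 	unsigned long to = from + desc->count;
> 	struct nfs_page *req;
> 	int ret = 0;
>
> 	/* Find the nfs_page (write request) attached to this page;
> 	 * nfs_page_find_request() is assumed here for illustration. */
> 	req = nfs_page_find_request(page);
> 	if (!req)
> 		return 0;
>
> 	/* The read can be satisfied from cache only if it falls
> 	 * entirely inside the byte range the nfs_page covers. */
> 	if (from >= req->wb_pgbase && to <= req->wb_pgbase + req->wb_bytes)
> 		ret = 1;
>
> 	nfs_release_request(req);
> 	return ret;
> }
>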
> I did a benchmark test using sysbench:
>
> sysbench --num-threads=16 --max-requests=100000 --test=fileio \
>   --file-block-size=2K --file-total-size=200M --file-test-mode=rndrw \
>   --file-fsync-freq=0 --file-rw-ratio=0.5 run
>
> The result was:
>
> -2.6.29-rc4
>
> Operations performed:  33356 Read, 66682 Write, 128 Other = 100166 Total
> Read 65.148Mb  Written 130.24Mb  Total transferred 195.39Mb  (3.1093Mb/sec)
>  1591.97 Requests/sec executed
>
> Test execution summary:
>     total time:                          62.8391s
>     total number of events:              100038
>     total time taken by event execution: 841.7603
>     per-request statistics:
>          min:                            0.0000s
>          avg:                            0.0084s
>          max:                            16.4564s
>          approx.  95 percentile:         0.0446s
>
> Threads fairness:
>     events (avg/stddev):           6252.3750/306.48
>     execution time (avg/stddev):   52.6100/0.38
>
>
> -2.6.29-rc4 + patch
>
> Operations performed:  33346 Read, 66662 Write, 128 Other = 100136 Total
> Read 65.129Mb  Written 130.2Mb  Total transferred 195.33Mb  (5.0113Mb/sec)
>  2565.81 Requests/sec executed
>
> Test execution summary:
>     total time:                          38.9772s
>     total number of events:              100008
>     total time taken by event execution: 339.6821
>     per-request statistics:
>          min:                            0.0000s
>          avg:                            0.0034s
>          max:                            1.6768s
>          approx.  95 percentile:         0.0200s
>
> Threads fairness:
>     events (avg/stddev):           6250.5000/302.04
>     execution time (avg/stddev):   21.2301/0.45
>
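> In other words, roughly a 61% higher request rate (2565.81 vs. 1591.97
> requests/sec), with the approximate 95th-percentile latency dropping
> from 44.6ms to 20.0ms.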
>
> I/O performance was significantly improved by the following patch.

OK, but again, this is not an entirely sane thing to do, is it (asking
for a 2K I/O size on a 4K page system)? What are the comparison results
with a 4K I/O size? I guess it will help some cases, but it's probably
hard to find realistic workloads that see such an improvement.
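That is, the same sysbench invocation but with the block size matching
the page size:

sysbench --num-threads=16 --max-requests=100000 --test=fileio \
  --file-block-size=4K --file-total-size=200M --file-test-mode=rndrw \
  --file-fsync-freq=0 --file-rw-ratio=0.5 run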

