Date:	Mon, 12 Sep 2011 17:09:20 +0400
From:	Maxim Patlasov <maxim.patlasov@...il.com>
To:	Shaohua Li <shli@...nel.org>
Cc:	shaohua.li@...el.com, axboe@...nel.dk, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] CFQ: fix handling 'deep' cfqq

Shaohua,

>> So the key problem here is how to detect whether a device is fast. Doing
>> the detection at the dispatch stage can't always give a correct result. On a
>> truly fast device, requests should complete in a short time, so I have
>> attached something along those lines. In my environment a hard disk is
>> detected as slow and an SSD as fast, but I haven't run any benchmarks so
>> far. What do you think about it?
>
> Thanks for the patch, I'll test it in several h/w configurations soon
> and let you know about results.
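
(For readers skimming the thread: the detection idea quoted above is, roughly,
to sample how long completed requests take and call the backing device "fast"
when the average stays small. A minimal sketch of that heuristic follows; the
struct, the 2 ms threshold and the 32-sample minimum are all invented for
illustration, the real change lives inside cfq-iosched.c:)

#include <stdbool.h>

/* Hypothetical per-device sample state; names and thresholds are
 * illustrative only, not taken from the actual patch. */
struct fast_detect {
	unsigned long samples;	/* completed requests observed */
	unsigned long total_us;	/* summed service time, usec   */
};

#define FAST_THRESHOLD_US 2000	/* "fast" if mean service < 2 ms   */
#define MIN_SAMPLES	  32	/* don't judge on too few requests */

static void fast_detect_sample(struct fast_detect *fd, unsigned long service_us)
{
	fd->samples++;
	fd->total_us += service_us;
}

static bool device_seems_fast(const struct fast_detect *fd)
{
	if (fd->samples < MIN_SAMPLES)
		return false;	/* not enough data yet */
	return (fd->total_us / fd->samples) < FAST_THRESHOLD_US;
}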

1. Single slow disk (ST3200826AS). Eight instances of aio-stress, cmd-line:

# aio-stress -a 4 -b 4 -c 1 -r 4 -O -o 0 -t 1 -d 1 -i 1 -s 16 f1_$I f2_$I f3_$I f4_$I

Aggregate throughput:

Pristine 3.1.0-rc5 (CFQ): 3.77 MB/s
Pristine 3.1.0-rc5 (noop): 2.63 MB/s
Pristine 3.1.0-rc5 (CFQ, slice_idle=0): 2.81 MB/s
3.1.0-rc5 + my patch (CFQ): 5.76 MB/s
3.1.0-rc5 + your patch (CFQ): 5.61 MB/s

2. Four modern disks (WD1003FBYX) assembled into a RAID-0 array (Adaptec
AAC-RAID (rev 09), 256 MB RAM). Eight instances of aio-stress, patched to
add a 1 ms think time between submitted batches:

> --- aio-stress-orig.c	2011-08-16 17:00:04.000000000 -0400
> +++ aio-stress.c	2011-08-18 14:49:31.000000000 -0400
> @@ -884,6 +884,7 @@ static int run_active_list(struct thread
>      }
>      if (num_built) {
>  	ret = run_built(t, num_built, t->iocbs);
> +	usleep(1000);
>  	if (ret < 0) {
>  	    fprintf(stderr, "error %d on run_built\n", ret);
>  	    exit(1);

Cmd-line:

# aio-stress -a 4 -b 4 -c 1 -r 4 -O -o 0 -t 1 -d 1 -i 1 f1_$I f2_$I f3_$I f4_$I

Aggregate throughput:

Pristine 3.1.0-rc5 (CFQ): 63.67 MB/s
Pristine 3.1.0-rc5 (noop): 100.8 MB/s
Pristine 3.1.0-rc5 (CFQ, slice_idle=0): 105.63 MB/s
3.1.0-rc5 + my patch (CFQ): 105.59 MB/s
3.1.0-rc5 + your patch (CFQ): 14.36 MB/s

So, to meet the needs of striped RAID arrays, it's not enough to measure
the service time of individual requests. We also need some way to measure
whether a given hdd/raid is able to service many requests simultaneously
in an effective way.
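
(One way to capture that, sketched only to illustrate the idea and not as a
proposal for CFQ itself: probe the device with the same workload at queue
depth 1 and at some larger depth N, and treat it as able to service requests
in parallel only if throughput actually scales. The structure, field names
and the 1.5x ratio below are invented for the example:)

#include <stdbool.h>

/* Hypothetical throughput samples, e.g. collected by running the same
 * random-read workload at two different queue depths.  Nothing here
 * comes from the actual CFQ code. */
struct depth_probe {
	unsigned long bytes_qd1;	/* bytes/sec completed at queue depth 1 */
	unsigned long bytes_qdN;	/* bytes/sec completed at queue depth N */
};

/* Treat the device as benefiting from deep queues only when going from
 * depth 1 to depth N buys at least ~50% more throughput. */
static bool scales_with_depth(const struct depth_probe *p)
{
	return p->bytes_qdN * 2 >= p->bytes_qd1 * 3;
}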

Thanks,
Maxim
