Message-Id: <DE2B8BBD-EFC6-46C7-8B0E-DC2CBCBDBC6A@unimore.it>
Date:	Wed, 11 Jun 2014 22:39:31 +0200
From:	Paolo Valente <paolo.valente@...more.it>
To:	Pavel Machek <pavel@....cz>
Cc:	Tejun Heo <tj@...nel.org>, Jens Axboe <axboe@...nel.dk>,
	Li Zefan <lizefan@...wei.com>,
	Fabio Checconi <fchecconi@...il.com>,
	Arianna Avanzini <avanzini.arianna@...il.com>,
	linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, cgroups@...r.kernel.org
Subject: Re: BFQ speed tests [was Re: [PATCH RFC - TAKE TWO - 00/12] New version of the BFQ I/O Scheduler]


On 4 Jun 2014, at 12:03, Pavel Machek <pavel@....cz> wrote:

> Hi!
> 
>> Should this attempt be useless as well, I will, if you do not mind, try by asking you more details about your system and reproducing your configuration as much as I can.
>> 
> 
> Try making BFQ the default scheduler. That seems to break it for me;
> when selected at runtime, it looks stable.

As I have already written to you privately, we have fixed the bug. It was a
clerical error, made while turning the original patchset into the series of
patches we then submitted.

The new patchset is available here:
http://algogroup.unimore.it/people/paolo/disk_sched/debugging-patches/3.16.0-rc0-v7rc5.tgz
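To try it, unpack the tarball and apply the patches on top of a 3.16-rc
tree, roughly as follows (the directory and file names below are
placeholders; check the actual contents of the tarball):

wget http://algogroup.unimore.it/people/paolo/disk_sched/debugging-patches/3.16.0-rc0-v7rc5.tgz
tar xzf 3.16.0-rc0-v7rc5.tgz
cd <path-to-3.16-rc-kernel-tree>
git am <unpacked-patch-dir>/*.patch   # or apply each patch with patch -p1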

I am not submitting this new, fixed patchset by email yet, because before
doing so we want to apply all the changes recommended by Tejun, and try to
turn the patchset into a 'transformer' of cfq into bfq (of course, if it is
better to proceed differently for this intermediate version of bfq as well,
we are willing to do so).
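For completeness, the two configurations compared above (BFQ as the
boot-time default versus BFQ selected at runtime) can be reproduced roughly
as follows; the Kconfig option name refers to the one added by the BFQ
series and should be double-checked against the patched tree:

# Runtime selection, per device, via sysfs:
cat /sys/block/sda/queue/scheduler          # e.g. "noop deadline cfq [bfq]"
echo bfq > /sys/block/sda/queue/scheduler

# Boot-time default: build the patched kernel with CONFIG_DEFAULT_BFQ=y,
# or pass elevator=bfq on the kernel command line.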

> 
> Anyway, here are some speed tests. Background load:
> […]
> root@duo:/data/tmp# echo cfq > /sys/block/sda/queue/scheduler 
> root@duo:/data/tmp# echo 3 > /proc/sys/vm/drop_caches
> root@duo:/data/tmp# cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme
> 
> (Machine was running out of disk space.)
> 
> (I alternate between cfq and bfq).
> 
> Benchmark. I chose git describe because it is sometimes part of the
> kernel build... and I actually wait for that.
> […]

We have also fixed this regression, which was related to both the
queue-merge mechanism and the heuristic that provides low latency to soft
real-time applications. The new patchset contains this fix as well. We have
repeated your tests (and other, similar tests) with the fixed version of
bfq; here are our results.

# Test with background writes

[root@...-testbed data]# echo cfq > /sys/block/sda/queue/scheduler
[root@...-testbed data]# echo 3 > /proc/sys/vm/drop_caches
[root@...-testbed data]# cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme; cat /dev/zero > delme

[root@...-testbed linux-lkml]# time git describe
v3.15-rc8-78-gd531c25

# BFQ
0.24user 0.14system 0:07.42elapsed 5%CPU
# CFQ
0.24user 0.16system 0:08.39elapsed 4%CPU
# BFQ
0.25user 0.15system 0:08.45elapsed 4%CPU
# CFQ
0.26user 0.15system 0:09.11elapsed 4%CPU

# Results without background workload

# BFQ
0.23user 0.12system 0:07.23elapsed 4%CPU
# CFQ
0.25user 0.13system 0:07.36elapsed 5%CPU
# BFQ
0.23user 0.14system 0:07.24elapsed 5%CPU
# CFQ
0.22user 0.14system 0:07.36elapsed 5%CPU
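For reference, runs like the ones above can be automated with a small
script along these lines (run as root; the device name and repository path
are examples and need to be adapted):

#!/bin/sh
# Alternate between bfq and cfq, drop the page cache before each run,
# and time `git describe`; GNU time's default output matches the
# format reported above. Start the background writer first if needed.
cd /data/linux-lkml || exit 1
for sched in bfq cfq bfq cfq; do
    echo "# $sched"
    echo "$sched" > /sys/block/sda/queue/scheduler
    echo 3 > /proc/sys/vm/drop_caches
    /usr/bin/time git describe
done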

Any feedback on these and other tests is more than welcome.

Thanks,
Paolo
