Message-Id: <28150949-24EF-42D9-87EF-D23B7C16DD50@linaro.org>
Date:   Tue, 11 Apr 2017 09:26:11 +0200
From:   Paolo Valente <paolo.valente@...aro.org>
To:     Andreas Herrmann <aherrmann@...e.com>
Cc:     Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: bfq-mq performance comparison to cfq


> On 10 Apr 2017, at 11:55, Paolo Valente <paolo.valente@...aro.org> wrote:
> 
>> 
>> On 10 Apr 2017, at 11:05, Andreas Herrmann <aherrmann@...e.com> wrote:
>> 
>> Hi Paolo,
>> 
>> I've looked at your WIP branch as of 4.11.0-bfq-mq-rc4-00155-gbce0818
>> and did some fio tests to compare the behavior to CFQ.
>> 
>> My understanding is that bfq-mq is supposed to be merged sooner or
>> later, and that it will then be the only reasonable I/O scheduler for
>> rotational devices under blk-mq. Hence I think it is interesting to
>> see what to expect performance-wise in comparison to CFQ, which is
>> usually used for such devices with the legacy block layer.
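>>
>> For reference, I selected the I/O scheduler per device via sysfs (a
>> sketch, not my exact commands; "bfq-mq" is my assumption about the
>> name this WIP branch registers, sdb is just an example device, and
>> CFQ of course requires the device to use the legacy block path):
>>
>>   # select the scheduler for a blk-mq device
>>   echo bfq-mq > /sys/block/sdb/queue/scheduler
>>   # verify: the active scheduler is shown in brackets
>>   cat /sys/block/sdb/queue/scheduler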
>> 
>> I've just done simple tests, iterating over the number of jobs (1-8,
>> as the test system had 8 CPUs) for all (random/sequential) read/write
>> patterns. The fixed set of fio parameters was '--size=5G
>> --group_reporting --ioengine=libaio --direct=1 --iodepth=1
>> --runtime=10'.
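>>
>> A single run then looks roughly like this (a sketch, not my exact
>> command line; --rw and --numjobs varied per configuration, and the
>> target device is just an example):
>>
>>   fio --name=seqread --filename=/dev/sdb --rw=read --numjobs=4 \
>>       --size=5G --group_reporting --ioengine=libaio --direct=1 \
>>       --iodepth=1 --runtime=10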
>> 
>> I've done 10 runs for each such configuration. The device used was an
>> older SAMSUNG HD103SJ 1TB disk, SATA attached. Results that stick out
>> the most are those for sequential reads and sequential writes:
>> 
>> * sequential reads
>> [0] - cfq, intel_pstate driver, powersave governor
>> [1] - bfq_mq, intel_pstate driver, powersave governor
>> 
>> jobs    [0] mean   [0] stddev    [1] mean   [1] stddev
>>    1   17060.300       77.090   17657.500       69.602
>>    2   15318.200       28.817   10678.000      279.070
>>    3   15403.200       42.762    9874.600       93.436
>>    4   14521.200      624.111    9918.700      226.425
>>    5   13893.900      144.354    9485.000      109.291
>>    6   13065.300      180.608    9419.800       75.043
>>    7   12169.600       95.422    9863.800      227.662
>>    8   12422.200      215.535   15335.300      245.764
>> 
>> * sequential writes
>> [0] - cfq, intel_pstate driver, powersave governor
>> [1] - bfq_mq, intel_pstate driver, powersave governor
>> 
>> jobs    [0] mean   [0] stddev    [1] mean   [1] stddev
>>    1   14171.300       80.796   14392.500      182.587
>>    2   13520.000       88.967    9565.400      119.400
>>    3   13396.100       44.936    9284.000       25.122
>>    4   13139.800       62.325    8846.600       45.926
>>    5   12942.400       45.729    8568.700       35.852
>>    6   12650.600       41.283    8275.500      199.273
>>    7   12475.900       43.565    8252.200       33.145
>>    8   12307.200       43.594   13617.500      127.773
>> 
>> With the performance governor instead of powersave, results were
>> (expectedly) higher, but the pattern was the same: bfq-mq shows a
>> "dent" for tests with 2-7 fio jobs. At the moment I have no
>> explanation for this behavior.
>> 
> 
> I have :)
> 
> BFQ, by default, is configured to privilege low latency over
> throughput. In this respect, as several of us have discussed a few
> times, even on these mailing lists, the only way to provide strong
> low-latency guarantees at the moment is through device idling. The
> throughput loss you see is very likely a consequence of that idling.
> 
> Why does the throughput go back up at eight jobs? Because, if many
> processes are spawned within a very short time interval, BFQ infers
> that some multi-job task is being started. Such parallel tasks
> usually prefer overall high throughput to single-process low latency,
> so BFQ does not idle the device for these processes.
> 
> That said, if you always want maximum throughput, even at the expense
> of latency, then just switch off the low-latency heuristics, i.e.,
> set low_latency to 0. Depending on the device, setting slice_idle to
> 0 may help a lot too (as it does with CFQ). If the throughput is
> still low even after forcing BFQ into throughput-only mode, then you
> have hit some bug, and I'll have a little more work to do ...
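> 
> Concretely, something like this (a sketch; I'm assuming the tunables
> sit in the usual per-scheduler iosched directory, and sdb is just an
> example device):
> 
>   # switch off the low-latency heuristics
>   echo 0 > /sys/block/sdb/queue/iosched/low_latency
>   # optionally, also disable device idling (applies to CFQ as well)
>   echo 0 > /sys/block/sdb/queue/iosched/slice_idle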
> 

I forgot two pieces of information:
1) The throughput drop lasts only a few seconds, after which BFQ
stops caring about the latency of the newborn fio processes and aims
only at throughput.
2) One of my main goals, if and when BFQ is merged, is to achieve
about the same low-latency guarantees without idling, and thus
without losing throughput.

Paolo


> Thanks,
> Paolo
> 
>> Regards,
>> Andreas
