Message-Id: <9EB760CE-0028-4766-AE9D-6E90028D8579@linaro.org>
Date: Thu, 22 Aug 2019 10:58:22 +0200
From: Paolo Valente <paolo.valente@...aro.org>
To: Tejun Heo <tj@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, newella@...com, clm@...com,
Josef Bacik <josef@...icpanda.com>, dennisz@...com,
Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-block <linux-block@...r.kernel.org>, kernel-team@...com,
cgroups@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
kafai@...com, songliubraving@...com, yhs@...com,
bpf@...r.kernel.org
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving
proportional controller
> On 20 Aug 2019, at 17:19, Tejun Heo <tj@...nel.org> wrote:
>
> Hello, Paolo.
>
> On Tue, Aug 20, 2019 at 05:04:25PM +0200, Paolo Valente wrote:
>> and makes one fio instance generate I/O for each group. The bandwidth
>> reported above is that reported by the fio instance emulating the
>> target client.
>>
>> Am I missing something?
>
> If you didn't configure QoS targets, the controller is using device
> qdepth saturation as the sole guidance in determining whether the
> device needs throttling. Please try configuring the target latencies.
> The bandwidth you see for single stream of rand ios should have direct
> correlation with how the latency targets are configured. The head
> letter for the patchset has some examples.
>
Ok, I tried with the parameters reported for a SATA SSD:
rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=400.00
and with a simpler configuration [1]: one target doing random reads
and only four interferers doing sequential reads, with all the
processes (groups) having the same weight.
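(For reference, I applied those parameters through the root cgroup's
io.cost.qos file, roughly as below; 8:0 is an assumed MAJ:MIN for the
SSD, to be adjusted for the actual device:)

```shell
# Configure iocost QoS for device 8:0 (MAJ:MIN is an assumption here).
# "ctrl=user" disables auto-tuning, so the latency targets given on
# the line are used as-is by the controller.
echo "8:0 enable=1 ctrl=user rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=400.00" \
    > /sys/fs/cgroup/io.cost.qos
```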
But there seemed to be little or no control over I/O: the target got
only 1.84 MB/s, against 1.15 MB/s with no control at all.
So I tried with rlat=1000 and rlat=100.
Control did improve, with same results for both values of rlat. The
problem is that these results still seem rather bad, both in terms of
throughput guaranteed to the target and in terms of total throughput.
Here are the results compared with BFQ (throughputs in MB/s):

                      io.weight        BFQ
target's throughput       3.415      6.224
total throughput        159.14     321.375
Am I doing something else wrong?
Thanks,
Paolo
[1] sudo ./bandwidth-latency.sh -t randread -s none -b weight -n 4
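(For completeness, a rough by-hand equivalent of that scenario, with
assumed cgroup names, device path and fio parameters rather than the
script's actual settings, would look something like:)

```shell
# Sketch of the benchmark scenario: one random-read target plus four
# sequential-read interferers, all groups with the same io.weight.
# Cgroup names, /dev/sda and the fio options are assumptions.
cd /sys/fs/cgroup
echo +io > cgroup.subtree_control
for g in target interf0 interf1 interf2 interf3; do
    mkdir -p $g
    echo 100 > $g/io.weight          # same weight for every group
done
# Target: one fio instance doing random reads from its own group.
( echo $BASHPID > target/cgroup.procs
  fio --name=target --filename=/dev/sda --rw=randread --bs=4k \
      --direct=1 --runtime=30 --time_based ) &
# Interferers: four fio instances doing sequential reads.
for i in 0 1 2 3; do
    ( echo $BASHPID > interf$i/cgroup.procs
      fio --name=interf$i --filename=/dev/sda --rw=read --bs=1M \
          --direct=1 --runtime=30 --time_based ) &
done
wait
```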
> Thanks.
>
> --
> tejun