Message-Id: <D9F6BC6D-FEB3-40CA-A33C-F501AE4434F0@linaro.org>
Date: Mon, 2 Sep 2019 21:43:25 +0200
From: Paolo Valente <paolo.valente@...aro.org>
To: Tejun Heo <tj@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, newella@...com, clm@...com,
Josef Bacik <josef@...icpanda.com>, dennisz@...com,
Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-block <linux-block@...r.kernel.org>, kernel-team@...com,
cgroups@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
kafai@...com, songliubraving@...com, yhs@...com,
bpf@...r.kernel.org
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving
proportional controller
> On 2 Sep 2019, at 17:56, Tejun Heo <tj@...nel.org> wrote:
>
> On Mon, Sep 02, 2019 at 05:45:50PM +0200, Paolo Valente wrote:
>> Thanks for these extra explanations. It is a little difficult for
>> me to understand exactly what the min/max tweaks do, but you did
>> give me the general idea.
>
> It just limits how high and low the IO issue rate, measured in
> cost, can go. I.e., if max is at 200%, the controller won't issue
> more than twice what the cost model says 100% is.
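(For the record, the clamp is configured through the min/max fields of io.cost.qos, visible in the dump below; the sketch here uses illustrative values, not the ones from my runs:)

```shell
# Illustrative only: clamp the issue rate between 50% and 200% of
# what the cost model estimates as 100% for device 8:0. The /cgroup
# mount point matches the one used in the script output below.
echo "8:0 min=50.00 max=200.00" > /cgroup/io.cost.qos
```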
>
>> Are these results in line with your expectations? If they are, then
>> I'd like to extend benchmarks to more mixes of workloads. Or should I
>> try some other QoS configuration first?
>
> They aren't. Can you please include the content of io.cost.qos and
> io.cost.model before each run? Note that partial writes to a subset
> of the parameters don't clear the other parameters.
>
Yep. I've added the printing of the two parameters to the script, and
I'm pasting the whole output, in case you can also extract some other
useful information from it.
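As an aside, regarding your note on partial writes: a partial write such as the sketch below (device and values illustrative) would leave the fields it does not mention at their previously set values rather than resetting them:

```shell
# Full QoS configuration for sda (8:0), as done in the script.
echo "8:0 enable=1 rpct=95 rlat=2500 wpct=95 wlat=5000" > /cgroup/io.cost.qos
# A later partial write updates only rpct; rlat, wpct and wlat keep
# the values set above instead of being cleared to defaults.
echo "8:0 rpct=90" > /cgroup/io.cost.qos
```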
$ sudo ./bandwidth-latency.sh -t randread -s none -b weight -n 7 -d 20
Switching to none for sda
echo "8:0 enable=1 rpct=95 rlat=2500 wpct=95 wlat=5000" > /cgroup/io.cost.qos
/cgroup/io.cost.qos 8:0 enable=1 ctrl=user rpct=95.00 rlat=2500 wpct=95.00 wlat=5000 min=1.00 max=10000.00
/cgroup/io.cost.model 8:0 ctrl=auto model=linear rbps=488636629 rseqiops=8932 rrandiops=8518 wbps=427891549 wseqiops=28755 wrandiops=21940
Not changing weight/limits for interferer group 0
Not changing weight/limits for interferer group 1
Not changing weight/limits for interferer group 2
Not changing weight/limits for interferer group 3
Not changing weight/limits for interferer group 4
Not changing weight/limits for interferer group 5
Not changing weight/limits for interferer group 6
Not changing weight/limits for interfered
Starting Interferer group 0
start_fio_jobs InterfererGroup0 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile0
Starting Interferer group 1
start_fio_jobs InterfererGroup1 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile1
Starting Interferer group 2
start_fio_jobs InterfererGroup2 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile2
Starting Interferer group 3
start_fio_jobs InterfererGroup3 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile3
Starting Interferer group 4
start_fio_jobs InterfererGroup4 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile4
Starting Interferer group 5
start_fio_jobs InterfererGroup5 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile5
Starting Interferer group 6
start_fio_jobs InterfererGroup6 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile6
Linux 5.3.0-rc6+ (paolo-ThinkPad-W520) 02/09/2019 _x86_64_ (8 CPU)
02/09/2019 21:39:11
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 66.53 5.22 0.10 1385 27
start_fio_jobs interfered 20 default randread MAX poisson 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile_interfered0
02/09/2019 21:39:14
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 154.67 20.63 0.05 61 0
02/09/2019 21:39:17
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 453.00 64.27 0.00 192 0
02/09/2019 21:39:20
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 675.33 95.99 0.00 287 0
02/09/2019 21:39:23
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 1907.67 348.61 0.00 1045 0
02/09/2019 21:39:26
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2414.67 462.98 0.00 1388 0
02/09/2019 21:39:29
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2429.67 438.71 0.00 1316 0
02/09/2019 21:39:32
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2437.00 475.79 0.00 1427 0
02/09/2019 21:39:35
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2162.33 346.97 0.00 1040 0
Results for one rand reader against 7 seq readers (I/O depth 1), weight-none with weights: (default, default)
Aggregated throughput:
min max avg std_dev conf99%
64.27 475.79 319.046 171.233 1011.97
Read throughput:
min max avg std_dev conf99%
64.27 475.79 319.046 171.233 1011.97
Write throughput:
min max avg std_dev conf99%
0 0 0 0 0
Interfered total throughput:
min max avg std_dev
1.032 4.455 2.266 0.742696
Interfered per-request total latency:
min max avg std_dev
0.11 12.005 1.7545 0.878281
Thanks,
Paolo
> Thanks.
>
> --
> tejun