Date:   Tue, 20 Aug 2019 17:04:25 +0200
From:   Paolo Valente <paolo.valente@...aro.org>
To:     Tejun Heo <tj@...nel.org>
Cc:     Jens Axboe <axboe@...nel.dk>, newella@...com, clm@...com,
        Josef Bacik <josef@...icpanda.com>, dennisz@...com,
        Li Zefan <lizefan@...wei.com>,
        Johannes Weiner <hannes@...xchg.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        linux-block <linux-block@...r.kernel.org>, kernel-team@...com,
        cgroups@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
        kafai@...com, songliubraving@...com, yhs@...com,
        bpf@...r.kernel.org
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving
 proportional controller



> On 20 Aug 2019, at 12:48, Paolo Valente <paolo.valente@...aro.org> wrote:
> 
> 
> 
>> On 14 Jun 2019, at 19:56, Tejun Heo <tj@...nel.org> wrote:
>> 
>> On Thu, Jun 13, 2019 at 06:56:10PM -0700, Tejun Heo wrote:
>> ...
>>> The patchset is also available in the following git branch.
>>> 
>>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow
>> 
>> Updated patchset available in the following branch.  Just build fixes
>> and cosmetic changes for now.
>> 
>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow-v2
>> 
> 
> Hi Tejun,
> I'm running the kernel from your tree above on an Ubuntu 18.04 system.
> 
> After unmounting the v1 blkio controller that gets mounted at startup,
> I created the v2 root as follows:
> 
> $ mount -t cgroup2 none /cgroup
> 
> Then I have:
> $ ls /cgroup
> cgroup.controllers  cgroup.max.descendants  cgroup.stat             cgroup.threads  io.weight.cost_model  system.slice
> cgroup.max.depth    cgroup.procs            cgroup.subtree_control  init.scope      io.weight.qos         user.slice
> 
> But the following command gives no output:
> $ cat /cgroup/io.weight.qos 
> 
> And, above all,
> $ echo 1 > /cgroup/io.weight.qos 
> bash: echo: write error: Invalid argument
> 
> No complaints in the kernel log.
> 
> What am I doing wrong? How can I make the controller work?
> 

I figured it out, sorry for my usual silly questions (for some reason,
I thought the controller could be enabled globally just by writing a 1).
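For anyone else hitting the same wall: the controller has to be enabled
per device, not globally.  A minimal sketch of the kind of write that is
expected (assuming io.weight.qos takes per-device MAJ:MIN key=value
lines; 8:0 below is a placeholder for the MAJ:MIN of the actual disk,
and the exact keys should be checked against the patchset's
documentation):

$ cat /sys/block/sda/dev
8:0
$ echo "8:0 enable=1" > /cgroup/io.weight.qos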

The problem now is that the controller doesn't seem to work.  I've
emulated 16 clients doing I/O on a SATA SSD.  One client, the target,
does random reads, while the remaining 15 clients, the interferers, do
sequential reads.
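Concretely, the two roles are emulated with fio jobs along these lines
(an illustrative sketch: file names, sizes and durations are
placeholders, not the exact options the S suite passes):

# target: random reads
fio --name=target --filename=/mnt/testfs/tgt --size=1G --rw=randread --time_based --runtime=30
# each interferer: sequential reads
fio --name=interf --filename=/mnt/testfs/intf0 --size=1G --rw=read --time_based --runtime=30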

Each client is encapsulated in a separate group, but whatever weight
is assigned to the target group, it gets the same, extremely low
bandwidth.  I have tried even the maximum weight ratio, i.e., 1000 for
the target and only 1 for each interferer.  Here are the results,
compared with BFQ (bandwidth in MB/s):

io.weight   BFQ
0.2         3.7

I ran this test with the script S/bandwidth-latency/bandwidth-latency.sh
of the S benchmark suite [1], invoked as follows:
sudo ./bandwidth-latency.sh -t randread -s none -b weight -n 15 -w 1000 -W 1

The above command simply creates the groups and assigns weights as
follows:

echo 1 > /cgroup/InterfererGroup0/io.weight
echo 1 > /cgroup/InterfererGroup1/io.weight
...
echo 1 > /cgroup/InterfererGroup14/io.weight
echo 1000 > /cgroup/interfered/io.weight

and starts one fio instance generating I/O in each group.  The
bandwidth reported above is the one measured by the fio instance
emulating the target client.
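In other words, for each interferer group the script does the
equivalent of the following (a simplified sketch with placeholder
paths; the actual steps are in the script itself):

mkdir /cgroup/InterfererGroup0
echo 1 > /cgroup/InterfererGroup0/io.weight
fio --name=interferer0 --filename=/mnt/testfs/intf0 --size=1G --rw=read &
echo $! > /cgroup/InterfererGroup0/cgroup.procs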

Am I missing something?

Thanks,
Paolo

[1] https://github.com/Algodev-github/S


> Thanks,
> Paolo
> 
>> Thanks.
>> 
>> -- 
>> tejun
> 
