Message-ID: <20160120190535.GC10553@redhat.com>
Date:	Wed, 20 Jan 2016 14:05:35 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Shaohua Li <shli@...com>
Cc:	linux-kernel@...r.kernel.org, axboe@...nel.dk, tj@...nel.org,
	jmoyer@...hat.com, Kernel-team@...com
Subject: Re: [RFC 0/3] block: proportional based blk-throttling

On Wed, Jan 20, 2016 at 09:49:16AM -0800, Shaohua Li wrote:
> Hi,
> 
> Currently we have 2 iocontrollers: blk-throttling is bandwidth based, CFQ is
> weight based. It would be great if there were a unified iocontroller covering
> the two. Also, blk-mq doesn't support an ioscheduler, leaving blk-throttling
> as the only option for blk-mq. It's time to have a scalable iocontroller
> supporting both bandwidth and weight based control and working with blk-mq.
> 
> blk-throttling is a good candidate: it works for both blk-mq and the legacy
> queue. It has a global lock, which is worrying for scalability, but it's not
> terrible in practice. In my test, NVMe IOPS can reach 1M/s with all CPUs
> running IO, and enabling blk-throttle costs around 2~3% IOPS and 10% cpu
> utilization. I'd expect this isn't a big problem for today's workloads. This
> patchset therefore tries to build a unified iocontroller by leveraging
> blk-throttling.
> 
> The idea is pretty simple. If we know the disk's total bandwidth, we can
> calculate each cgroup's bandwidth according to its weight, and blk-throttling
> can use the calculated bandwidth to throttle the cgroup. Total disk bandwidth
> changes dramatically with the IO pattern, so long history is meaningless; the
> simple estimation algorithm in patch 1 works pretty well when the IO pattern
> changes.
> 
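IIUC the estimator in patch 1 only looks at a short recent window and throws
older history away. A toy userspace model of that idea (all names below are
mine, not taken from the patch; the real code is presumably what the diffstat
shows in block/blk-core.c):

#include <stdio.h>

#define WINDOW_MS 100UL /* short sampling window; old history is discarded */

struct bw_estimate {
    unsigned long window_bytes; /* bytes completed in the current window */
    unsigned long window_start; /* ms timestamp when the window opened */
    unsigned long bw;           /* last estimate, bytes per ms */
};

static void bw_account_io(struct bw_estimate *e, unsigned long now_ms,
                          unsigned long bytes)
{
    if (now_ms - e->window_start >= WINDOW_MS) {
        /* close the window: estimate = bytes / elapsed ms */
        e->bw = e->window_bytes / (now_ms - e->window_start);
        e->window_bytes = 0;
        e->window_start = now_ms;
    }
    e->window_bytes += bytes;
}

int main(void)
{
    struct bw_estimate e = { 0, 0, 0 };
    unsigned long t;

    /* simulate a 4KB completion every ms for half a second */
    for (t = 0; t <= 500; t++)
        bw_account_io(&e, t, 4096);
    printf("estimated bandwidth: %lu bytes/ms\n", e.bw);
    return 0;
}

If that's roughly right, it also explains why the estimate tracks pattern
changes quickly: anything older than one window never enters the estimate.
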
> This is a feedback system. If we underestimate total disk bandwidth, we assign
> less bandwidth to the cgroups, the cgroups dispatch less IO, and an even lower
> total disk bandwidth is estimated. To break that loop, the cgroup bandwidth
> calculation always uses (1 + 1/8) * disk_bandwidth. Another issue is that a
> cgroup could be inactive: if an inactive cgroup is accounted in, the other
> cgroups are assigned less bandwidth, dispatch less IO, and total disk
> bandwidth drops further. To avoid this, we periodically check the cgroups and
> exclude inactive ones.
> 
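And the feedback break plus the inactive-cgroup exclusion sound like the
sketch below. The idle-timeout check is my guess at how "inactive" is
detected; all names are mine, not the patch's:

#include <stdio.h>
#include <stdbool.h>

#define IDLE_MS 200UL /* a cgroup idle longer than this is excluded */

struct tg { /* stand-in for a throttle group */
    const char *name;
    unsigned int weight;
    unsigned long last_dispatch_ms;
};

static void assign_limits(struct tg *grps, int n, unsigned long disk_bw,
                          unsigned long now_ms)
{
    unsigned int active_weight = 0;
    int i;

    /* only cgroups that dispatched recently share the disk bandwidth */
    for (i = 0; i < n; i++)
        if (now_ms - grps[i].last_dispatch_ms < IDLE_MS)
            active_weight += grps[i].weight;
    if (!active_weight)
        return;

    for (i = 0; i < n; i++) {
        bool active = now_ms - grps[i].last_dispatch_ms < IDLE_MS;
        /* over-provision by 1/8 so an underestimate can correct upward */
        unsigned long bw = active ?
            disk_bw * grps[i].weight / active_weight * 9 / 8 : 0;
        printf("%s: %lu bytes/ms\n", grps[i].name, bw);
    }
}

int main(void)
{
    struct tg grps[] = {
        { "fast", 200, 1000 }, /* dispatched recently: active */
        { "slow", 100, 1000 }, /* dispatched recently: active */
        { "idle", 100, 0 },    /* long idle: excluded from the split */
    };

    /* pretend the estimator just reported 4096 bytes/ms */
    assign_limits(grps, 3, 4096, 1000);
    return 0;
}

With weights 200/100 the two active groups get roughly a 2:1 split, which
matches the fio test you describe below.
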
> To test this, create two fio jobs and assign them different weights. You will
> see the jobs get different bandwidth, roughly in proportion to their weights.

Patches look pretty small. Nice to see an implementation which works
with faster devices and gets away from the dependency on CFQ.

How does one switch between weight based and bandwidth based throttling?
What's the default?

So this has been implemented at the throttling layer. Is weight based
throttling enabled by default, or does one need to enable it explicitly?

What's the performance impact of the new weight based throttling?

Thanks
Vivek

> 
> Comments and benchmarks are welcome!
> 
> Thanks,
> Shaohua
> 
> Shaohua Li (3):
>   block: estimate disk bandwidth
>   blk-throttling: weight based throttling
>   blk-throttling: detect inactive cgroup
> 
>  block/blk-core.c       |  49 ++++++++++++
>  block/blk-sysfs.c      |  13 ++++
>  block/blk-throttle.c   | 198 ++++++++++++++++++++++++++++++++++++++++++++++++-
>  include/linux/blkdev.h |   4 +
>  4 files changed, 263 insertions(+), 1 deletion(-)
> 
> -- 
> 2.4.6
