Message-ID: <cover.1453308862.git.shli@fb.com>
Date:	Wed, 20 Jan 2016 09:49:16 -0800
From:	Shaohua Li <shli@...com>
To:	<linux-kernel@...r.kernel.org>
CC:	<axboe@...nel.dk>, <tj@...nel.org>, <vgoyal@...hat.com>,
	<jmoyer@...hat.com>, <Kernel-team@...com>
Subject: [RFC 0/3] block: proportional based blk-throttling

Hi,

Currently we have two IO controllers: blk-throttling, which is bandwidth based,
and CFQ, which is weight based. It would be great to have a unified IO
controller covering both. Furthermore, blk-mq doesn't support IO schedulers,
which leaves blk-throttling as the only option for blk-mq. It's time for a
scalable IO controller that supports both bandwidth- and weight-based control
and works with blk-mq.

blk-throttling is a good candidate: it works for both blk-mq and the legacy
queue. It has a global lock, which is worrying for scalability, but in
practice the cost isn't terrible. In my test, an NVMe device reaches 1M IOPS
with all CPUs issuing IO, and enabling blk-throttle costs around 2~3% of IOPS
and 10% of CPU utilization. I'd expect this isn't a big problem for today's
workloads. This patchset therefore tries to build a unified IO controller by
leveraging blk-throttling.

The idea is pretty simple. If we know the disk's total bandwidth, we can
calculate each cgroup's bandwidth according to its weight, and blk-throttling
can use the calculated bandwidth to throttle the cgroup. The disk's total
bandwidth changes dramatically with the IO pattern, so long history is
meaningless; the simple estimation algorithm in patch 1 works pretty well when
the IO pattern changes.
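
To illustrate what "short history" means here, a minimal user-space sketch of
such an estimator (this is not the patch 1 code; the names and the window
length are made up):

/*
 * Illustrative sketch only, not the actual patch 1 code. It keeps a
 * short sliding window of recently completed bytes and derives the
 * disk's total bandwidth from that window, so old history (from a
 * different IO pattern) ages out quickly. All names are hypothetical.
 */
#include <stdint.h>

#define BW_WINDOW_MS	100	/* assumed window; short so pattern changes show up fast */

struct bw_estimator {
	uint64_t window_start_ms;	/* when the current window began */
	uint64_t bytes_in_window;	/* bytes completed in the window */
	uint64_t bandwidth;		/* last estimate, bytes per second */
};

/* Called on each IO completion with the current time and IO size. */
static void bw_update(struct bw_estimator *est, uint64_t now_ms, uint64_t bytes)
{
	est->bytes_in_window += bytes;
	if (now_ms - est->window_start_ms >= BW_WINDOW_MS) {
		/* Scale the window's byte count up to bytes/second. */
		est->bandwidth = est->bytes_in_window * 1000 /
				 (now_ms - est->window_start_ms);
		est->window_start_ms = now_ms;
		est->bytes_in_window = 0;
	}
}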

This is a feedback system. If we underestimate the disk's total bandwidth, we
assign less bandwidth to the cgroups, the cgroups dispatch less IO, and an
even lower total bandwidth is estimated. To break this loop, the cgroup
bandwidth calculation always uses (1 + 1/8) * disk_bandwidth. Another issue is
that a cgroup can be inactive: if inactive cgroups are accounted in, the other
cgroups are assigned less bandwidth, dispatch less IO, and the estimated disk
bandwidth drops further. To avoid that, we periodically check the cgroups and
exclude inactive ones from the calculation.
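
Putting the two rules together, the per-cgroup assignment looks roughly like
this (again an illustrative sketch with made-up names, not the patch code):

/*
 * Illustrative sketch of the bandwidth assignment described above.
 * Inactive cgroups are skipped when summing weights, and the estimate
 * is inflated by 1/8 to break the underestimation feedback loop.
 */
#include <stdbool.h>
#include <stdint.h>

struct tg {			/* hypothetical stand-in for a throttle group */
	unsigned int weight;
	bool active;		/* dispatched IO recently? */
	uint64_t bps_limit;	/* assigned bandwidth limit, bytes/sec */
};

static void assign_bandwidth(struct tg *tgs, int nr_tgs, uint64_t disk_bw)
{
	uint64_t total_weight = 0;
	int i;

	/* Exclude inactive cgroups so they don't eat into the shares. */
	for (i = 0; i < nr_tgs; i++)
		if (tgs[i].active)
			total_weight += tgs[i].weight;
	if (!total_weight)
		return;

	/* Hand out (1 + 1/8) * disk_bandwidth proportionally to weight. */
	for (i = 0; i < nr_tgs; i++)
		if (tgs[i].active)
			tgs[i].bps_limit = (disk_bw + disk_bw / 8) *
					   tgs[i].weight / total_weight;
}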

To test this, create two fio jobs and assign their cgroups different weights;
you will see the jobs get different bandwidth, roughly in proportion to their
weights. A sketch of such a test is below.
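
For example (the "weight" interface file name is a placeholder for whatever
patch 2 exposes; fio's cgroup option attaches a job to a blkio cgroup):

# Illustrative test sketch; "weight" stands in for the real interface file.
mkdir /sys/fs/cgroup/blkio/heavy /sys/fs/cgroup/blkio/light
echo 500 > /sys/fs/cgroup/blkio/heavy/weight	# placeholder file name
echo 100 > /sys/fs/cgroup/blkio/light/weight	# placeholder file name

fio --name=heavy --cgroup=heavy --filename=/dev/nvme0n1 --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 &
fio --name=light --cgroup=light --filename=/dev/nvme0n1 --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 &
wait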

Comments and benchmarks are welcome!

Thanks,
Shaohua

Shaohua Li (3):
  block: estimate disk bandwidth
  blk-throttling: weight based throttling
  blk-throttling: detect inactive cgroup

 block/blk-core.c       |  49 ++++++++++++
 block/blk-sysfs.c      |  13 ++++
 block/blk-throttle.c   | 198 ++++++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/blkdev.h |   4 +
 4 files changed, 263 insertions(+), 1 deletion(-)

-- 
2.4.6
