Date:	Sun, 29 Nov 2009 21:59:07 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	linux-kernel@...r.kernel.org, jens.axboe@...cle.com
Cc:	nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
	ryov@...inux.co.jp, fernando@....ntt.co.jp, s-uchida@...jp.nec.com,
	taka@...inux.co.jp, guijianfeng@...fujitsu.com, jmoyer@...hat.com,
	righi.andrea@...il.com, m-ikeda@...jp.nec.com, vgoyal@...hat.com,
	czoccolo@...il.com, Alan.Brunelle@...com
Subject: Block IO Controller V4

Hi Jens,

This is V4 of the Block IO controller patches, on top of the "for-2.6.33"
branch of the block tree.

A consolidated patch can be found here:

http://people.redhat.com/vgoyal/io-controller/blkio-controller/blkio-controller-v4.patch


Changes from V3:
- Removed the group_idle tunable and introduced a group_isolation tunable.
  Thanks to Corrado for the idea and to Alan for testing and reporting
  performance issues with random reads.

  Generally, if random readers are put into separate groups, each group in
  turn gets exclusive access to the disk, we drive a lower queue depth, and
  throughput drops. So by default queues doing random IO are now moved to the
  root group, which reduces the performance drop caused by idling on each
  group's sync-noidle tree.

  If stronger isolation/fairness for random IO is wanted, set
  group_isolation=1 (see the group_isolation sketch after this list); note
  that this will also cost throughput if a group does not have enough IO
  going on to keep the disk busy.

- Got rid of the wait_busy() function in select_queue(). Now I increase the
  slice length of a queue by one slice_idle period to give it a chance to
  get busy before it is expired, so that the group does not lose its share.
  This has simplified the logic a bit. Thanks again to Corrado for the idea.

- Introduced a macro "for_each_cfqg_st" to traverse all the service trees
  of a group (a sketch of such a macro follows this list).
  
- Async workload share is now calculated based on system-wide busy queues,
  not just the queues in the root group.

- Allow sync queues in other groups to preempt the async queue in the root
  group.
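
As a rough illustration of the group_isolation tunable mentioned in the first
item, below is a minimal userspace sketch for enabling it on one disk. The
sysfs path is an assumption modelled on where CFQ exposes its other tunables;
it is not spelled out in this mail.

/*
 * Sketch only (not from the patchset): enable group_isolation for one disk.
 * The path is an assumption; check the patchset's Documentation for the
 * real location.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/sda/queue/iosched/group_isolation";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	/* 0 = favour overall throughput (default), 1 = favour per-group
	 * isolation/fairness for random IO at some throughput cost */
	fputs("1\n", f);
	fclose(f);
	return 0;
}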
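
And a self-contained sketch of what a "walk all service trees of a group"
macro can look like, for the for_each_cfqg_st item above. The workload
class/type names and the struct layout are illustrative stand-ins, not the
exact identifiers from the patches.

/*
 * Illustrative only: one service tree per (prio class, workload type) pair
 * for BE and RT, plus a single idle tree per group.
 */
#include <stdio.h>
#include <string.h>

enum wl_prio { BE_WORKLOAD, RT_WORKLOAD, IDLE_WORKLOAD };
enum wl_type { ASYNC_WORKLOAD, SYNC_NOIDLE_WORKLOAD, SYNC_WORKLOAD, TYPE_NR };

struct cfq_rb_root { int count; };

struct cfq_group {
	struct cfq_rb_root service_trees[IDLE_WORKLOAD][TYPE_NR];
	struct cfq_rb_root service_tree_idle;
};

/* Visit every service tree of @cfqg exactly once. */
#define for_each_cfqg_st(cfqg, i, j, st)				\
	for ((i) = 0; (i) <= IDLE_WORKLOAD; (i)++)			\
		for ((j) = 0, (st) = (i) < IDLE_WORKLOAD ?		\
			&(cfqg)->service_trees[(i)][(j)] :		\
			&(cfqg)->service_tree_idle;			\
		     ((i) < IDLE_WORKLOAD && (j) < TYPE_NR) ||		\
		     ((i) == IDLE_WORKLOAD && (j) == 0);		\
		     (j)++, (st) = (i) < IDLE_WORKLOAD ?		\
			&(cfqg)->service_trees[(i)][(j)] : NULL)

int main(void)
{
	struct cfq_group g;
	struct cfq_rb_root *st;
	int i, j, trees = 0;

	memset(&g, 0, sizeof(g));

	for_each_cfqg_st(&g, i, j, st)
		if (st)
			trees++;

	/* 2 classes (BE, RT) * 3 workload types + 1 idle tree = 7 */
	printf("%d service trees per group\n", trees);
	return 0;
}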
  
Changes from V2:
- Group target latency is now calculated in proportion to group weight
  instead of dividing the slice evenly among all the groups (see the sketch
  after this list).

- Modified cfq_rb_first() to check "count" and return NULL if the service
  tree is empty.

- Reshuffled the patch order: moved the Documentation patch to the end and
  moved the group idling patch further down the series.

- Fixed the "slice_end" issue in the slice usage calculation raised by Gui.
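
A small numeric sketch of the first item above (names and rounding are
illustrative, not the exact CFQ arithmetic): each group's share of the target
latency window is weight/total_weight rather than an even 1/N split.

#include <stdio.h>

int main(void)
{
	const unsigned int target_latency_ms = 300;       /* total window */
	const unsigned int weights[] = { 100, 200, 500 }; /* busy groups  */
	const unsigned int nr = sizeof(weights) / sizeof(weights[0]);
	unsigned int total = 0, i;

	for (i = 0; i < nr; i++)
		total += weights[i];

	for (i = 0; i < nr; i++)
		printf("group %u: even split %u ms, weighted slice %u ms\n",
		       i, target_latency_ms / nr,
		       target_latency_ms * weights[i] / total);
	return 0;
}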
  
Changes from V1:

- Rebased the patches onto the "for-2.6.33" branch.
- Dropped support for group priority classes for now. For the time being
  only BE-class groups are supported.
 
After the discussions at the IO mini-summit in Tokyo, Japan, it was agreed
that a single IO control policy, whether at leaf nodes or at higher-level
nodes, does not meet all the requirements. We need the capability to support
more than one IO control policy (such as proportional weight division and
max bandwidth control) and to implement some of these policies at
higher-level logical devices.

It was agreed that CFQ is the right place to implement the time-based
proportional weight division policy. Other policies, such as max bandwidth
control/throttling, make more sense at higher-level logical devices.

This patch introduces the blkio cgroup controller, which provides the
management interface for block IO control. The idea is to keep the interface
common and, behind the scenes, switch policies based on user options. The
user can then control IO throughout the IO stack with a single cgroup
interface.
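
As a rough illustration of what "a single cgroup interface" can look like
from userspace, here is a hedged sketch that creates a group, assigns it a
weight and moves a task into it. The mount point and the "blkio.weight" file
name are assumptions for illustration; the real file names are defined by
the patches and their Documentation, not by this mail.

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	char pid[32];

	/* Assumes the controller is already mounted, e.g.:
	 *   mount -t cgroup -o blkio none /cgroup/blkio
	 */
	mkdir("/cgroup/blkio/test1", 0755);

	/* Give this group a larger proportional weight than its siblings
	 * (the file name and valid range are assumptions). */
	write_str("/cgroup/blkio/test1/blkio.weight", "500\n");

	/* Move the current task into the group. */
	snprintf(pid, sizeof(pid), "%d\n", (int)getpid());
	write_str("/cgroup/blkio/test1/tasks", pid);
	return 0;
}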

Apart from the blkio cgroup interface, this patchset also modifies CFQ to
implement time-based proportional weight division of the disk. CFQ already
does this in flat mode; it has been modified to do group IO scheduling as
well.

IO control is a huge problem, and the moment we start addressing all the
issues in one patchset it bloats to unmanageable proportions and nothing
gets into the kernel. So at the IO mini-summit we agreed to take small
steps: once a piece of code is inside the kernel and has stabilized, take
the next step. This is the first step.

Some parts of the code are based on BFQ patches posted by Paolo and Fabio.

Your feedback is welcome.

TODO
====
- Direct random writers seem to be very fickle in terms of workload
  classification. They switch between the sync-idle and sync-noidle workload
  types in a somewhat unpredictable manner. Debug and fix this.

- Support async IO control (buffered writes).

 Buffered writes are a beast: solving the problem requires changes in many
 places and the patchset becomes huge. Hence we plan to support control of
 sync IO first and work on async IO afterwards.

 Some of the work items identified are:

	- Per memory cgroup dirty ratio
	- Possibly modification of writeback to force writeback from a
	  particular cgroup.
	- Implement IO tracking support so that a bio can be mapped to a cgroup.
	- Per group request descriptor infrastructure in block layer.
	- At CFQ level, implement per cfq_group async queues.	

  In this patchset, all async IO goes into system-wide queues and there are
  no per-group async queues. That means we will see service differentiation
  only for sync IO. Async IO will be handled later.

- Support for higher-level policies, like a max BW controller.
- Support RT-class groups as well.

Thanks
Vivek

 Documentation/cgroups/blkio-controller.txt |  135 +++++
 block/Kconfig                              |   22 +
 block/Kconfig.iosched                      |   17 +
 block/Makefile                             |    1 +
 block/blk-cgroup.c                         |  312 ++++++++++
 block/blk-cgroup.h                         |   90 +++
 block/cfq-iosched.c                        |  901 +++++++++++++++++++++++++---
 include/linux/cgroup_subsys.h              |    6 +
 include/linux/iocontext.h                  |    4 +
 9 files changed, 1401 insertions(+), 87 deletions(-)
