Message-Id: <1335477561-11131-1-git-send-email-tj@kernel.org>
Date: Thu, 26 Apr 2012 14:59:10 -0700
From: Tejun Heo <tj@...nel.org>
To: axboe@...nel.dk
Cc: vgoyal@...hat.com, ctalbott@...gle.com, rni@...gle.com,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
containers@...ts.linux-foundation.org, fengguang.wu@...el.com,
hughd@...gle.com, akpm@...ux-foundation.org
Subject: [PATCHSET] block: implement per-blkg request allocation
Hello,
Currently, the block layer shares a single request_list (@q->rq) for
all IOs regardless of their blkcg associations. This means that once
the shared pool is exhausted, blkcg limits don't mean much: whoever
grabs a freed request first gets the next IO slot.
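To make the failure mode concrete, here is a minimal illustrative C
sketch (simplified names, not actual kernel code) of why a single
shared pool can't honor per-cgroup weights: the only admission check
is a global count, so whichever task retries first after a free wins.

/* Illustrative only: one pool shared by every blkcg on the queue. */
struct shared_request_pool {
	int count;	/* requests currently allocated from q->rq */
	int max;	/* q->nr_requests */
};

/*
 * Every IO, from every blkcg, passes the same check; once the pool
 * is full, the next freed request goes to whoever happens to retry
 * first, so a cgroup issuing many small IOs can keep the pool
 * exhausted regardless of its weight.
 */
static int try_alloc_request(struct shared_request_pool *pool)
{
	if (pool->count >= pool->max)
		return 0;	/* caller sleeps until *any* request frees */
	pool->count++;
	return 1;
}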
This priority inversion can be easily demonstrated by creating a blkio
cgroup with a very low weight, putting a program that issues a lot of
random direct IOs in it, and running a sequential IO from a different
cgroup. As soon as the request pool is used up, the sequential IO
bandwidth crashes.
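For reference, the random direct-IO generator for such a test can be
as simple as the following user-space sketch; the device path, block
size and span are arbitrary placeholders.  Run it from the low-weight
cgroup while a sequential reader runs in another cgroup.

/* randio.c: issue lots of random direct reads.
 * Build: gcc -O2 -o randio randio.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/dev/sdb";
	const long bs = 4096;			/* aligned block size */
	const long long span = 1LL << 30;	/* randomize over 1GiB */
	void *buf;
	int fd;

	fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign(&buf, bs, bs)) {
		perror("posix_memalign");
		return 1;
	}
	for (;;) {
		long long off = (rand() % (span / bs)) * bs;

		if (pread(fd, buf, bs, off) < 0)
			perror("pread");
	}
}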
This patchset implements per-blkg request allocation so that each
blkcg-request_queue pair has its own request pool to allocate from.
This isolates different blkcgs in terms of request allocation.
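Very roughly, the idea looks like the kernel-style sketch below.  It
is simplified and not lifted from the patches; lookup_blkg() and
root_rl in particular are placeholder names.

/* Illustrative sketch -- simplified, not the actual patch. */
struct request_list {
	mempool_t *rq_pool;		/* backing pool for struct request */
	int count[2];			/* allocated sync/async requests */
	int starved[2];
	wait_queue_head_t wait[2];
};

/* One blkg exists per (blkcg, request_queue) pair and now carries
 * its own request_list instead of everyone sharing q->rq. */
struct blkcg_gq {
	struct request_queue *q;
	struct request_list rl;		/* per-blkg request pool */
	/* ... policy data, refcount, etc ... */
};

/* Request allocation then picks the pool from the bio's blkg, so
 * exhausting one blkcg's pool no longer starves the others. */
static struct request_list *rl_for_bio(struct request_queue *q,
				       struct bio *bio)
{
	struct blkcg_gq *blkg = lookup_blkg(bio_blkcg(bio), q);

	return blkg ? &blkg->rl : &q->root_rl;	/* fall back to root pool */
}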
Most changes are straightforward; unfortunately, bdi isn't blkcg-aware
yet, so it currently just propagates the congestion state from the
root blkcg. As writeback is currently always on the root blkcg, this
kinda works for write congestion, but readahead may behave
non-optimally under congestion for now. This needs to be improved, but
the situation is still way better than blkcg completely collapsing.
0001-blkcg-fix-blkg_alloc-failure-path.patch
0002-blkcg-__blkg_lookup_create-doesn-t-have-to-fail-on-r.patch
0003-blkcg-make-root-blkcg-allocation-use-GFP_KERNEL.patch
0004-mempool-add-gfp_mask-to-mempool_create_node.patch
0005-block-drop-custom-queue-draining-used-by-scsi_transp.patch
0006-block-refactor-get_request-_wait.patch
0007-block-allocate-io_context-upfront.patch
0008-blkcg-inline-bio_blkcg-and-friends.patch
0009-block-add-q-nr_rqs-and-move-q-rq.elvpriv-to-q-nr_rqs.patch
0010-block-prepare-for-multiple-request_lists.patch
0011-blkcg-implement-per-blkg-request-allocation.patch
0001-0003 are assorted fixes / improvements which can be separated
from this patchset. I'm just sending them as part of this series for
convenience.
0004 adds @gfp_mask to mempool_create_node(). This is necessary
because blkg allocation is on the IO path and a blkg now contains a
mempool for its request_list. Note that blkg allocation failure
doesn't lead to catastrophic failure. It just hinders blkcg
enforcement.
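As a usage sketch (kernel-style fragment, not standalone; the GFP flag
and call site are illustrative rather than quoted from the patches),
the new argument lets a blkg's request_list mempool be set up without
GFP_KERNEL from the IO path:

/* Prototype after 0004: mempool_create_node() gains @gfp_mask. */
mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
			       mempool_free_t *free_fn, void *pool_data,
			       gfp_t gfp_mask, int node_id);

/* e.g. setting up a blkg's request pool while IOs are in flight */
rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
				  mempool_free_slab, request_cachep,
				  GFP_NOWAIT, q->node);
if (!rl->rq_pool)
	return -ENOMEM;	/* not fatal: only blkcg enforcement suffers */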
0005 drops the custom queue draining, which I don't think is necessary
and which gets in the way of further updates.
0006-0010 are prep patches and 0011 implements per-blkg request
allocation.
This patchset is on top of,
block/for-3.5/core bd1a68b59c "vmsplice: relax alignement requireme..."
+ [1] blkcg: tg_stats_alloc_lock is an irq lock
and is also available in the following git branch.
git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git blkcg-rl
Jens, I still can't reproduce the boot failure you were seeing on
block/for-3.5/core, so am just basing this series on top. Once we
figure that one out, we can resequence the patches.
Thanks.
block/blk-cgroup.c | 147 ++++++++++++++++----------
block/blk-cgroup.h | 121 +++++++++++++++++++++
block/blk-core.c | 200 ++++++++++++++++++------------------
block/blk-sysfs.c | 34 +++---
block/blk-throttle.c | 3
block/blk.h | 3
block/bsg-lib.c | 53 ---------
drivers/scsi/scsi_transport_fc.c | 38 ------
drivers/scsi/scsi_transport_iscsi.c | 2
include/linux/blkdev.h | 53 +++++----
include/linux/bsg-lib.h | 1
include/linux/mempool.h | 3
mm/mempool.c | 12 +-
13 files changed, 379 insertions(+), 291 deletions(-)
--
tejun
[1] http://article.gmane.org/gmane.linux.kernel/1288400