Message-Id: <20080912.104228.30188313.k-ueda@ct.jp.nec.com>
Date: Fri, 12 Sep 2008 10:42:28 -0400 (EDT)
From: Kiyoshi Ueda <k-ueda@...jp.nec.com>
To: jens.axboe@...cle.com, agk@...hat.com,
James.Bottomley@...senPartnership.com, akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
dm-devel@...hat.com, j-nomura@...jp.nec.com, k-ueda@...jp.nec.com
Subject: [PATCH 03/13] mm: lld busy status exporting interface
This patch adds an interface for checking the busy status of an LLD
(low-level driver) from the block layer. (The corresponding scsi
patch is included in this series.) It resolves a performance problem
on request stacking devices, described below.

Some drivers, such as the scsi mid layer, stop dispatching requests
when they detect a busy state on a lower-level device (host, bus or
device). This allows other requests to stay in the I/O scheduler's
queue for a chance of merging.
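
For illustration, that dispatch pattern looks roughly like the sketch
below. struct lld_device, lld_device_busy() and lld_dispatch_request()
are hypothetical stand-ins for driver-private state and checks (e.g.
scsi host/target/device queue limits), not real scsi code:

#include <linux/blkdev.h>

/* Hypothetical driver-private state and helpers. */
struct lld_device;
extern int lld_device_busy(struct lld_device *dev);
extern void lld_dispatch_request(struct lld_device *dev,
				 struct request *rq);

static void example_request_fn(struct request_queue *q)
{
	struct request *rq;

	while ((rq = elv_next_request(q)) != NULL) {
		struct lld_device *dev = q->queuedata;

		if (lld_device_busy(dev)) {
			/*
			 * Stop dispatching: leaving the request in the
			 * I/O scheduler's queue gives later I/O a
			 * chance to merge with it.
			 */
			break;
		}

		blkdev_dequeue_request(rq);
		lld_dispatch_request(dev, rq);
	}
}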
Request stacking drivers, such as request-based dm, should follow the
same logic. However, there is no generic interface for a stacking
driver to check whether its underlying device(s) are busy. If a
request stacking driver dispatches and submits requests to a busy
underlying device, the requests stay in the underlying device's queue
with no chance of merging. This causes a performance problem under
bursty I/O load.
With this patch, the busy state of the underlying device is exported
via a state flag in the queue's backing_dev_info, so the request
stacking driver can check it and stop dispatching requests while the
device is busy. The underlying device driver must set/clear the flag
appropriately (a usage sketch follows the list):
    ON:  when the device driver cannot process requests immediately.
    OFF: when the device driver can process requests immediately,
         including abnormal situations where the device driver needs
         to kill all requests.
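
A minimal sketch of the intended usage on both sides; the
bdi_lld_congested(), set_bdi_lld_congested() and
clear_bdi_lld_congested() helpers are the ones added by this patch,
while the example_* functions are hypothetical:

#include <linux/backing-dev.h>
#include <linux/blkdev.h>

/*
 * LLD side (hypothetical driver): flag busy/non-busy transitions.
 * backing_dev_info is embedded in struct request_queue here.
 */
static void example_lld_update_busy(struct request_queue *q, int busy)
{
	if (busy)
		set_bdi_lld_congested(&q->backing_dev_info);
	else
		clear_bdi_lld_congested(&q->backing_dev_info);
}

/*
 * Stacking driver side (hypothetical, e.g. request-based dm):
 * check the flag before dispatching to an underlying queue.
 */
static int example_stacker_may_dispatch(struct request_queue *under_q)
{
	/*
	 * Non-zero means the LLD is busy; keep requests queued
	 * upstream so they can still merge in our I/O scheduler.
	 */
	return !bdi_lld_congested(&under_q->backing_dev_info);
}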
Signed-off-by: Kiyoshi Ueda <k-ueda@...jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@...jp.nec.com>
Cc: James Bottomley <James.Bottomley@...senPartnership.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
---
 include/linux/backing-dev.h |    8 ++++++++
 mm/backing-dev.c            |   13 +++++++++++++
 2 files changed, 21 insertions(+)
Index: 2.6.27-rc6/include/linux/backing-dev.h
===================================================================
--- 2.6.27-rc6.orig/include/linux/backing-dev.h
+++ 2.6.27-rc6/include/linux/backing-dev.h
@@ -26,6 +26,7 @@ enum bdi_state {
 	BDI_pdflush,		/* A pdflush thread is working this device */
 	BDI_write_congested,	/* The write queue is getting full */
 	BDI_read_congested,	/* The read queue is getting full */
+	BDI_lld_congested,	/* The device/host is busy */
 	BDI_unused,		/* Available bits start here */
 };
 
@@ -226,8 +227,15 @@ static inline int bdi_rw_congested(struc
 	return bdi_congested(bdi, (1 << BDI_read_congested) |
				   (1 << BDI_write_congested));
 }
 
+static inline int bdi_lld_congested(struct backing_dev_info *bdi)
+{
+	return bdi_congested(bdi, 1 << BDI_lld_congested);
+}
+
 void clear_bdi_congested(struct backing_dev_info *bdi, int rw);
 void set_bdi_congested(struct backing_dev_info *bdi, int rw);
+void clear_bdi_lld_congested(struct backing_dev_info *bdi);
+void set_bdi_lld_congested(struct backing_dev_info *bdi);
 long congestion_wait(int rw, long timeout);
 
Index: 2.6.27-rc6/mm/backing-dev.c
===================================================================
--- 2.6.27-rc6.orig/mm/backing-dev.c
+++ 2.6.27-rc6/mm/backing-dev.c
@@ -279,6 +279,19 @@ void set_bdi_congested(struct backing_de
 }
 EXPORT_SYMBOL(set_bdi_congested);
 
+void clear_bdi_lld_congested(struct backing_dev_info *bdi)
+{
+	clear_bit(BDI_lld_congested, &bdi->state);
+	smp_mb__after_clear_bit();
+}
+EXPORT_SYMBOL_GPL(clear_bdi_lld_congested);
+
+void set_bdi_lld_congested(struct backing_dev_info *bdi)
+{
+	set_bit(BDI_lld_congested, &bdi->state);
+}
+EXPORT_SYMBOL_GPL(set_bdi_lld_congested);
+
 /**
  * congestion_wait - wait for a backing_dev to become uncongested
  * @rw: READ or WRITE
--