Message-Id: <1169657989.3037.15.camel@mulgrave.il.steeleye.com>
Date: Wed, 24 Jan 2007 08:59:49 -0800
From: James Bottomley <James.Bottomley@...elEye.com>
To: Ed Lin <ed.lin@...mise.com>
Cc: linux-scsi <linux-scsi@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
jeff <jeff@...zik.org>, promise_linux <promise_linux@...mise.com>
Subject: Re: [patch] scsi: use lock per host instead of per device for
shared queue tag host
On Tue, 2007-01-23 at 16:53 -0800, Ed Lin wrote:
> The block layer uses a lock to protect each request queue. Every scsi
> device has its own request queue, and the queue lock is the default lock
> in struct request_queue. This is fine for the normal case. But for a host
> with a shared queue tag map (e.g. stex controllers), a per-device queue
> lock means the shared tag map is not protected when multiple devices are
> accessed at the same time. This patch is a simple fix for that situation:
> it introduces a host-wide queue lock to protect the shared tag map.
> Without this patch we see various kernel panics (including the BUG() and
> kernel errors in blk_queue_start_tag and blk_queue_end_tag of ll_rw_blk.c)
> when accessing different devices simultaneously (e.g. copying a big file
> from one device to another on smp kernels).
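
[For concreteness, the approach described above amounts to something like
the sketch below. This is illustrative only, not the actual stex patch;
the st_hba layout and the stex_slave_alloc hook name are assumptions. The
idea is simply that every scsi_device behind the shared-tag host ends up
using one host-wide spinlock, so blk_queue_start_tag()/blk_queue_end_tag()
can no longer run concurrently from two device queues.]

	#include <linux/spinlock.h>
	#include <linux/blkdev.h>
	#include <scsi/scsi_device.h>
	#include <scsi/scsi_host.h>

	struct st_hba {
		spinlock_t host_queue_lock;	/* one lock shared by all device queues */
		/* ... driver-private fields ... */
	};

	static int stex_slave_alloc(struct scsi_device *sdev)
	{
		struct st_hba *hba = (struct st_hba *)sdev->host->hostdata;

		/* replace the per-device queue lock with the shared host lock,
		 * so tag operations on the shared tag map are serialised */
		sdev->request_queue->queue_lock = &hba->host_queue_lock;
		return 0;
	}
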
This patch looks OK in principle.
However, are you sure you're not using a sledgehammer to crack a nut?
If the only reason you're doing this is because of the shared tag map,
then probably that should be the area you protect with a per-tag-map
lock. The net effect of what you've done will be to serialise all
accesses to your storage devices. For a small number of devices, this
probably won't matter that much, but for large numbers of devices,
you're probably going to introduce artificial performance degradation in
the I/O scheduler.
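
[To make the per-tag-map idea concrete, here is a rough sketch. The
shared_tag_map structure and the tag_map_get/tag_map_put helpers are
made-up names, not existing block-layer API; the point is only that the
lock travels with the shared tag map, so contention is limited to tag
allocation and release rather than serialising all queue activity.]

	#include <linux/spinlock.h>
	#include <linux/bitops.h>

	struct shared_tag_map {
		spinlock_t lock;		/* protects tag_map below */
		unsigned long *tag_map;		/* one bit per outstanding tag */
		int max_depth;			/* number of tags in the map */
	};

	static int tag_map_get(struct shared_tag_map *map)
	{
		unsigned long flags;
		int tag;

		spin_lock_irqsave(&map->lock, flags);
		tag = find_first_zero_bit(map->tag_map, map->max_depth);
		if (tag < map->max_depth)
			__set_bit(tag, map->tag_map);
		else
			tag = -1;		/* tag map exhausted */
		spin_unlock_irqrestore(&map->lock, flags);

		return tag;
	}

	static void tag_map_put(struct shared_tag_map *map, int tag)
	{
		unsigned long flags;

		spin_lock_irqsave(&map->lock, flags);
		__clear_bit(tag, map->tag_map);
		spin_unlock_irqrestore(&map->lock, flags);
	}
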
James