Message-ID: <1305009600.21534.587.camel@debian>
Date: Tue, 10 May 2011 14:40:00 +0800
From: "Alex,Shi" <alex.shi@...el.com>
To: jaxboe@...ionio.com, James.Bottomley@...senpartnership.com
Cc: "Li, Shaohua" <shaohua.li@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Performance drop on SCSI hard disk
commit c21e6beba8835d09bb80e34961 removed the REENTER flag and changed
scsi_run_queue() to punt all requests on starved_list devices to
kblockd. Yes, as Jens mentioned, performance on slow SCSI disks is hurt
by this. :) (Intel SSDs are not affected.)
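For reference, blk_run_queue_async() does roughly the following (a
simplified sketch from memory of the 2.6.39-era block/blk-core.c, so the
exact upstream code may differ): instead of running the request function
in the caller's context, it only schedules the queue's delayed work on
kblockd.

void blk_run_queue_async(struct request_queue *q)
{
	if (likely(!blk_queue_stopped(q)))
		/* punt: kblockd runs the queue later in worker context */
		queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
}

So every starved-list device now takes a trip through kblockd before its
queue is run again, which presumably hurts most on slow disks, matching
the numbers below.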
In our testing on a 12 SAS disk JBOD, fio write with the sync ioengine
drops about 30~40% in throughput, fio randread/randwrite with the aio
ioengine drop about 20%/50%, and fio mmap testing is also hurt.
With the following debug patch, the performance is fully recovered in
our testing. But without the REENTER flag, in some corner cases, such as
a device being blocked and then unblocked repeatedly, __blk_run_queue()
may recursively call scsi_run_queue() and cause a kernel stack overflow.
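For context, the guard that the commit removed from __blk_run_queue()
looked roughly like this (again a from-memory sketch rather than the
exact upstream code): the first caller runs ->request_fn() inline, and a
re-entrant caller is punted to kblockd instead of recursing, so the
stack cannot grow without bound.

	/* only recurse once; a nested call is deferred to kblockd */
	if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
		q->request_fn(q);
		queue_flag_clear(QUEUE_FLAG_REENTER, q);
	} else
		queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);

That is exactly the corner case the debug patch below no longer guards
against.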
I don't know the details of the block device driver, so I'm just
wondering why SCSI needs the REENTER flag here. :)
James, do you have any ideas on this?
Regards
Alex
======
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index e9901b8..24e8589 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -432,8 +432,11 @@ static void scsi_run_queue(struct request_queue *q)
 				       &shost->starved_list);
 			continue;
 		}
-
-		blk_run_queue_async(sdev->request_queue);
+		spin_unlock(shost->host_lock);
+		spin_lock(sdev->request_queue->queue_lock);
+		__blk_run_queue(sdev->request_queue);
+		spin_unlock(sdev->request_queue->queue_lock);
+		spin_lock(shost->host_lock);
 	}
 	/* put any unprocessed entries back */
 	list_splice(&starved_list, &shost->starved_list);