Message-ID: <4C5073F3.1060406@vlnb.net>
Date: Wed, 28 Jul 2010 22:16:19 +0400
From: Vladislav Bolkhovitin <vst@...b.net>
To: Tejun Heo <tj@...nel.org>
CC: Bryan Mesich <bryan.mesich@...u.edu>,
scst-devel@...ts.sourceforge.net,
Jens Axboe <jens.axboe@...cle.com>,
linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
dm-devel@...hat.com
Subject: RAID/block regression starting from 2.6.32, bisected
Hello,
In recent kernels we have been experiencing a problem in our setup: with the SCST BLOCKIO backend, some BIOs are finished with error -EIO, i.e. the completion callback is called for them with that error. It happens quite often, far more often than an actual I/O error plausibly would. (The BLOCKIO backend simply converts all incoming SCSI commands to the corresponding block requests.)
After some investigation we concluded that, most likely, raid5.c::make_request() for some reason sometimes calls bio_endio() on bios that do not have BIO_UPTODATE set.
We bisected it to commit:
commit a82afdfcb8c0df09776b6458af6b68fc58b2e87b
Author: Tejun Heo <tj@...nel.org>
Date: Fri Jul 3 17:48:16 2009 +0900
block: use the same failfast bits for bio and request
bio and request use the same set of failfast bits. This patch makes
the following changes to simplify things.
* enumify BIO_RW* bits and reorder bits such that BIO_RW_FAILFAST_*
bits coincide with __REQ_FAILFAST_* bits.
* The above pushes BIO_RW_AHEAD out of sync with __REQ_FAILFAST_DEV
but the matching is useless anyway. init_request_from_bio() is
responsible for setting FAILFAST bits on FS requests and non-FS
requests never use BIO_RW_AHEAD. Drop the code and comment from
blk_rq_bio_prep().
* Define REQ_FAILFAST_MASK which is OR of all FAILFAST bits and
simplify FAILFAST flags handling in init_request_from_bio().
Signed-off-by: Tejun Heo <tj@...nel.org>
Signed-off-by: Jens Axboe <jens.axboe@...cle.com>
After looking at it, I can't see how it could lead to the effect we are experiencing. Could anybody comment on this, please? Is this a known problem?
The error can be only reproduced when running RAID 5. The general layout is:
Disks --> RAID5 --> LVM --> BLOCKIO VDISK
The problem is easy to reproduce by forcing the RAID 5 array to re-sync its members, e.g. by failing one member out, adding it back into the array, and then generating some I/O with dd. In fact, just writing out the partition table on the exported block device is usually enough to provoke the error.
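For reference, the steps above would look roughly like this (device and LVM names are placeholders, not from our actual setup; this is destructive, so only run it against a disposable test array):

```shell
# Fail one member out of the RAID 5 array and add it back,
# which triggers a re-sync:
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdc1

# While the re-sync is running, generate some I/O through the
# Disks -> RAID5 -> LVM -> BLOCKIO stack:
dd if=/dev/zero of=/dev/vg0/blockio_lv bs=1M count=64 oflag=direct
```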
The complete thread on the topic can be found at http://sourceforge.net/mailarchive/forum.php?thread_name=20100727220110.GF31152%40atlantis.cc.ndsu.nodak.edu&forum_name=scst-devel
If any additional information is needed we would be glad to provide it.
Thanks,
Vlad